Test Report: Docker_Linux_crio_arm64 21808

db33af8e7a29a5e500790b374373258f8b494afd:2025-12-17:42825

Failed tests (42/316). A sketch for re-running individual failures locally follows the table.

Order  Failed test  Duration (s)
38 TestAddons/serial/Volcano 0.34
44 TestAddons/parallel/Registry 16.92
45 TestAddons/parallel/RegistryCreds 0.48
46 TestAddons/parallel/Ingress 144.12
47 TestAddons/parallel/InspektorGadget 5.29
48 TestAddons/parallel/MetricsServer 5.39
50 TestAddons/parallel/CSI 54.28
51 TestAddons/parallel/Headlamp 3.61
52 TestAddons/parallel/CloudSpanner 5.41
53 TestAddons/parallel/LocalPath 9.54
54 TestAddons/parallel/NvidiaDevicePlugin 6.35
55 TestAddons/parallel/Yakd 6.28
171 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy 499.19
173 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart 369.31
175 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods 2.41
185 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd 2.45
186 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly 2.8
187 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig 734.62
188 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth 2.27
191 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService 0.06
194 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd 1.73
197 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd 3.01
201 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect 2.34
203 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim 241.63
213 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels 1.39
219 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel 0.57
222 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/Setup 0.08
223 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect 94.2
228 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp 0.06
229 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List 0.25
230 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput 0.26
231 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS 0.26
232 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format 0.25
233 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL 0.47
237 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port 2.53
275 TestMultiControlPlane/serial/RestartClusterKeepsNodes 432.1
277 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 3.53
293 TestJSONOutput/pause/Command 2.3
299 TestJSONOutput/unpause/Command 1.57
358 TestKubernetesUpgrade 785.71
384 TestPause/serial/Pause 6.27
468 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 7200.092
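To iterate on a single failure from this table, the integration suite runs under the standard Go test runner, with -run taking a test-name regex. A minimal sketch, assuming the minikube repo root; the build tag and the --minikube-start-args flag are assumptions recalled from the repo (its Makefile integration target is authoritative), while the start args themselves are taken from this job's docker/crio profile config in the logs below:

	# Re-run one failed test against a docker + crio cluster (flags hedged as above).
	go test -v -tags integration -timeout 90m ./test/integration \
		-run 'TestAddons/parallel/Registry' \
		-args --minikube-start-args='--driver=docker --container-runtime=crio'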
TestAddons/serial/Volcano (0.34s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:852: skipping: crio not supported
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-052340 addons disable volcano --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-052340 addons disable volcano --alsologtostderr -v=1: exit status 11 (337.759455ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1217 20:13:55.930669  495320 out.go:360] Setting OutFile to fd 1 ...
	I1217 20:13:55.932154  495320 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:13:55.932174  495320 out.go:374] Setting ErrFile to fd 2...
	I1217 20:13:55.932181  495320 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:13:55.932462  495320 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-485134/.minikube/bin
	I1217 20:13:55.932802  495320 mustload.go:66] Loading cluster: addons-052340
	I1217 20:13:55.933217  495320 config.go:182] Loaded profile config "addons-052340": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:13:55.933237  495320 addons.go:622] checking whether the cluster is paused
	I1217 20:13:55.933355  495320 config.go:182] Loaded profile config "addons-052340": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:13:55.933371  495320 host.go:66] Checking if "addons-052340" exists ...
	I1217 20:13:55.933883  495320 cli_runner.go:164] Run: docker container inspect addons-052340 --format={{.State.Status}}
	I1217 20:13:55.955179  495320 ssh_runner.go:195] Run: systemctl --version
	I1217 20:13:55.955237  495320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-052340
	I1217 20:13:55.975554  495320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/addons-052340/id_rsa Username:docker}
	I1217 20:13:56.102115  495320 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 20:13:56.102215  495320 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 20:13:56.136445  495320 cri.go:89] found id: "a40f8c4c9d66754b6dd5a9559839fcf1fe8c00a06c71ce2f09926993d2c3d9eb"
	I1217 20:13:56.136477  495320 cri.go:89] found id: "5b40645a2f29661a7d2787522f23071b2c894de3a6b5b7638a2111a3f9485374"
	I1217 20:13:56.136483  495320 cri.go:89] found id: "4a7c07a0b97549aa9a9ce8e8e19127c9c8cf4e24abdccd8f61dcd057b97c9d88"
	I1217 20:13:56.136487  495320 cri.go:89] found id: "d3baa47458bc085e8d55815cf0497461a119e95081ce45cce1d577901a7dbf8d"
	I1217 20:13:56.136490  495320 cri.go:89] found id: "fcc9a7828f6b875df447059eabaf032b1c2d17e3c003bd1948f528d321bd3c85"
	I1217 20:13:56.136493  495320 cri.go:89] found id: "765f61bbb3ba86834abad3ede0a99451b487e6dc73b004c63a20a8f02e1d84a5"
	I1217 20:13:56.136497  495320 cri.go:89] found id: "d33875f7a8b74239db487ffa04bcf41c083f1ad65f29c43e2ae8b0dd783b521c"
	I1217 20:13:56.136500  495320 cri.go:89] found id: "79a9e5f9433489a03f4b2296098c887736f30cde921105d6ffa972cf7406abb4"
	I1217 20:13:56.136503  495320 cri.go:89] found id: "8d7f5c62629eb9e2812369690fb943f4e57ef24985d574b704ffc9b79a56d549"
	I1217 20:13:56.136510  495320 cri.go:89] found id: "af528db69b5d34f2ebcd038272fbfe9866e82f22a612276fb737683d83c256b9"
	I1217 20:13:56.136514  495320 cri.go:89] found id: "028c23d163f91b2b1e5d8071a70c7fb133ac203b414302242719cd40dba8d733"
	I1217 20:13:56.136517  495320 cri.go:89] found id: "85713e1610062bb4691b694312bf4faa71e96232dd94bc9fc564053646cde3a0"
	I1217 20:13:56.136521  495320 cri.go:89] found id: "cb82734cbf7f969e92c5a0b06400eb1d90d484748c850c4888051348e720776e"
	I1217 20:13:56.136524  495320 cri.go:89] found id: "f60e3e143ad43d5768067a6b27a4ece464edd1136c6931405c68bea040dd1097"
	I1217 20:13:56.136527  495320 cri.go:89] found id: "18d9f4acabfa7df7f42816d2545211aeda7f7362f2af1f80e188ea780addc6f8"
	I1217 20:13:56.136533  495320 cri.go:89] found id: "fead2bfafa7367c08236e1dd9040e848a879a067c3b70a78578d71429854ebf6"
	I1217 20:13:56.136539  495320 cri.go:89] found id: "2b6e269a93d8bccdf1b28bd4369628c766dbf092b4be199c7da9d8d756358c68"
	I1217 20:13:56.136543  495320 cri.go:89] found id: "7fd15b6b594715c56d64ee54d7a6852f04c749cfdf1c08a4bcf60d7a2e38d3b6"
	I1217 20:13:56.136546  495320 cri.go:89] found id: "6311d7d7f6f04d57961651f5acfdebe6792efa3db55df12e18440d726be5abcf"
	I1217 20:13:56.136549  495320 cri.go:89] found id: "6481ede2a21b9816559407d8564d57427a11b523832053f995eb07f4d1ce83bc"
	I1217 20:13:56.136554  495320 cri.go:89] found id: "873499eab93d6e655f10d1bf2c19abe29efc74e958b7418723b5c7c02699e059"
	I1217 20:13:56.136558  495320 cri.go:89] found id: "45a4f23a594c0e4311b2eac64abad5f26d74229e5aa0a19e11b7b7c18f0e227d"
	I1217 20:13:56.136561  495320 cri.go:89] found id: "dd0351330b604610224568f93b842fec24e6e4e932b309a6741ffab6f0d17b9f"
	I1217 20:13:56.136563  495320 cri.go:89] found id: ""
	I1217 20:13:56.136616  495320 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 20:13:56.165027  495320 out.go:203] 
	W1217 20:13:56.167999  495320 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T20:13:56Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T20:13:56Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 20:13:56.168043  495320 out.go:285] * 
	* 
	W1217 20:13:56.174387  495320 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 20:13:56.177282  495320 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable volcano addon: args "out/minikube-linux-arm64 -p addons-052340 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.34s)
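Every addons-disable failure in this run exits the same way; the Registry, RegistryCreds, and remaining parallel addon tests below hit it too. Before disabling an addon, minikube checks whether the cluster is paused: it lists kube-system containers via crictl (which succeeds above), then runs sudo runc list -f json on the node, which exits 1 because /run/runc does not exist. A minimal by-hand reproduction, reusing the profile name and commands from the log above; wrapping them in minikube ssh is my assumption about how to drive them interactively:

	# Step 1 of the pause check: enumerate kube-system containers (succeeds in the log).
	out/minikube-linux-arm64 -p addons-052340 ssh "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	# Step 2: ask runc for paused containers; this is the call that fails.
	out/minikube-linux-arm64 -p addons-052340 ssh "sudo runc list -f json"
	# stderr: open /run/runc: no such file or directory
	# Assumption: /run/runc is runc's default state directory; on this crio node it was never created.
	out/minikube-linux-arm64 -p addons-052340 ssh "ls -d /run/runc"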

TestAddons/parallel/Registry (16.92s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 6.240445ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-6b586f9694-h2xmf" [2534706e-f0bb-4990-85ac-495c0ace51cc] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004356031s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-5q5m2" [ae7a6b86-2960-405a-9d77-ff957fe9411a] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.004202311s
addons_test.go:394: (dbg) Run:  kubectl --context addons-052340 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-052340 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-052340 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.361056302s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-arm64 -p addons-052340 ip
2025/12/17 20:14:23 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-052340 addons disable registry --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-052340 addons disable registry --alsologtostderr -v=1: exit status 11 (290.969467ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1217 20:14:23.177570  495889 out.go:360] Setting OutFile to fd 1 ...
	I1217 20:14:23.178378  495889 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:14:23.178399  495889 out.go:374] Setting ErrFile to fd 2...
	I1217 20:14:23.178406  495889 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:14:23.178732  495889 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-485134/.minikube/bin
	I1217 20:14:23.179097  495889 mustload.go:66] Loading cluster: addons-052340
	I1217 20:14:23.179531  495889 config.go:182] Loaded profile config "addons-052340": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:14:23.179553  495889 addons.go:622] checking whether the cluster is paused
	I1217 20:14:23.179751  495889 config.go:182] Loaded profile config "addons-052340": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:14:23.179773  495889 host.go:66] Checking if "addons-052340" exists ...
	I1217 20:14:23.180372  495889 cli_runner.go:164] Run: docker container inspect addons-052340 --format={{.State.Status}}
	I1217 20:14:23.200295  495889 ssh_runner.go:195] Run: systemctl --version
	I1217 20:14:23.200345  495889 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-052340
	I1217 20:14:23.225449  495889 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/addons-052340/id_rsa Username:docker}
	I1217 20:14:23.323122  495889 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 20:14:23.323213  495889 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 20:14:23.361698  495889 cri.go:89] found id: "a40f8c4c9d66754b6dd5a9559839fcf1fe8c00a06c71ce2f09926993d2c3d9eb"
	I1217 20:14:23.361728  495889 cri.go:89] found id: "5b40645a2f29661a7d2787522f23071b2c894de3a6b5b7638a2111a3f9485374"
	I1217 20:14:23.361734  495889 cri.go:89] found id: "4a7c07a0b97549aa9a9ce8e8e19127c9c8cf4e24abdccd8f61dcd057b97c9d88"
	I1217 20:14:23.361737  495889 cri.go:89] found id: "d3baa47458bc085e8d55815cf0497461a119e95081ce45cce1d577901a7dbf8d"
	I1217 20:14:23.361741  495889 cri.go:89] found id: "fcc9a7828f6b875df447059eabaf032b1c2d17e3c003bd1948f528d321bd3c85"
	I1217 20:14:23.361744  495889 cri.go:89] found id: "765f61bbb3ba86834abad3ede0a99451b487e6dc73b004c63a20a8f02e1d84a5"
	I1217 20:14:23.361747  495889 cri.go:89] found id: "d33875f7a8b74239db487ffa04bcf41c083f1ad65f29c43e2ae8b0dd783b521c"
	I1217 20:14:23.361750  495889 cri.go:89] found id: "79a9e5f9433489a03f4b2296098c887736f30cde921105d6ffa972cf7406abb4"
	I1217 20:14:23.361753  495889 cri.go:89] found id: "8d7f5c62629eb9e2812369690fb943f4e57ef24985d574b704ffc9b79a56d549"
	I1217 20:14:23.361760  495889 cri.go:89] found id: "af528db69b5d34f2ebcd038272fbfe9866e82f22a612276fb737683d83c256b9"
	I1217 20:14:23.361763  495889 cri.go:89] found id: "028c23d163f91b2b1e5d8071a70c7fb133ac203b414302242719cd40dba8d733"
	I1217 20:14:23.361766  495889 cri.go:89] found id: "85713e1610062bb4691b694312bf4faa71e96232dd94bc9fc564053646cde3a0"
	I1217 20:14:23.361770  495889 cri.go:89] found id: "cb82734cbf7f969e92c5a0b06400eb1d90d484748c850c4888051348e720776e"
	I1217 20:14:23.361788  495889 cri.go:89] found id: "f60e3e143ad43d5768067a6b27a4ece464edd1136c6931405c68bea040dd1097"
	I1217 20:14:23.361794  495889 cri.go:89] found id: "18d9f4acabfa7df7f42816d2545211aeda7f7362f2af1f80e188ea780addc6f8"
	I1217 20:14:23.361805  495889 cri.go:89] found id: "fead2bfafa7367c08236e1dd9040e848a879a067c3b70a78578d71429854ebf6"
	I1217 20:14:23.361812  495889 cri.go:89] found id: "2b6e269a93d8bccdf1b28bd4369628c766dbf092b4be199c7da9d8d756358c68"
	I1217 20:14:23.361817  495889 cri.go:89] found id: "7fd15b6b594715c56d64ee54d7a6852f04c749cfdf1c08a4bcf60d7a2e38d3b6"
	I1217 20:14:23.361820  495889 cri.go:89] found id: "6311d7d7f6f04d57961651f5acfdebe6792efa3db55df12e18440d726be5abcf"
	I1217 20:14:23.361823  495889 cri.go:89] found id: "6481ede2a21b9816559407d8564d57427a11b523832053f995eb07f4d1ce83bc"
	I1217 20:14:23.361827  495889 cri.go:89] found id: "873499eab93d6e655f10d1bf2c19abe29efc74e958b7418723b5c7c02699e059"
	I1217 20:14:23.361831  495889 cri.go:89] found id: "45a4f23a594c0e4311b2eac64abad5f26d74229e5aa0a19e11b7b7c18f0e227d"
	I1217 20:14:23.361834  495889 cri.go:89] found id: "dd0351330b604610224568f93b842fec24e6e4e932b309a6741ffab6f0d17b9f"
	I1217 20:14:23.361837  495889 cri.go:89] found id: ""
	I1217 20:14:23.361889  495889 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 20:14:23.386348  495889 out.go:203] 
	W1217 20:14:23.395420  495889 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T20:14:23Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T20:14:23Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 20:14:23.395449  495889 out.go:285] * 
	* 
	W1217 20:14:23.401457  495889 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 20:14:23.406138  495889 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable registry addon: args "out/minikube-linux-arm64 -p addons-052340 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (16.92s)

TestAddons/parallel/RegistryCreds (0.48s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 3.694583ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-052340
addons_test.go:334: (dbg) Run:  kubectl --context addons-052340 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-052340 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-052340 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (265.36519ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1217 20:15:23.373245  497920 out.go:360] Setting OutFile to fd 1 ...
	I1217 20:15:23.374130  497920 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:15:23.374149  497920 out.go:374] Setting ErrFile to fd 2...
	I1217 20:15:23.374156  497920 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:15:23.374558  497920 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-485134/.minikube/bin
	I1217 20:15:23.377547  497920 mustload.go:66] Loading cluster: addons-052340
	I1217 20:15:23.377959  497920 config.go:182] Loaded profile config "addons-052340": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:15:23.377971  497920 addons.go:622] checking whether the cluster is paused
	I1217 20:15:23.379480  497920 config.go:182] Loaded profile config "addons-052340": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:15:23.379515  497920 host.go:66] Checking if "addons-052340" exists ...
	I1217 20:15:23.380117  497920 cli_runner.go:164] Run: docker container inspect addons-052340 --format={{.State.Status}}
	I1217 20:15:23.399421  497920 ssh_runner.go:195] Run: systemctl --version
	I1217 20:15:23.399477  497920 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-052340
	I1217 20:15:23.417391  497920 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/addons-052340/id_rsa Username:docker}
	I1217 20:15:23.514511  497920 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 20:15:23.514590  497920 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 20:15:23.553147  497920 cri.go:89] found id: "a40f8c4c9d66754b6dd5a9559839fcf1fe8c00a06c71ce2f09926993d2c3d9eb"
	I1217 20:15:23.553171  497920 cri.go:89] found id: "5b40645a2f29661a7d2787522f23071b2c894de3a6b5b7638a2111a3f9485374"
	I1217 20:15:23.553176  497920 cri.go:89] found id: "4a7c07a0b97549aa9a9ce8e8e19127c9c8cf4e24abdccd8f61dcd057b97c9d88"
	I1217 20:15:23.553181  497920 cri.go:89] found id: "d3baa47458bc085e8d55815cf0497461a119e95081ce45cce1d577901a7dbf8d"
	I1217 20:15:23.553184  497920 cri.go:89] found id: "fcc9a7828f6b875df447059eabaf032b1c2d17e3c003bd1948f528d321bd3c85"
	I1217 20:15:23.553188  497920 cri.go:89] found id: "765f61bbb3ba86834abad3ede0a99451b487e6dc73b004c63a20a8f02e1d84a5"
	I1217 20:15:23.553191  497920 cri.go:89] found id: "d33875f7a8b74239db487ffa04bcf41c083f1ad65f29c43e2ae8b0dd783b521c"
	I1217 20:15:23.553193  497920 cri.go:89] found id: "79a9e5f9433489a03f4b2296098c887736f30cde921105d6ffa972cf7406abb4"
	I1217 20:15:23.553197  497920 cri.go:89] found id: "8d7f5c62629eb9e2812369690fb943f4e57ef24985d574b704ffc9b79a56d549"
	I1217 20:15:23.553203  497920 cri.go:89] found id: "af528db69b5d34f2ebcd038272fbfe9866e82f22a612276fb737683d83c256b9"
	I1217 20:15:23.553206  497920 cri.go:89] found id: "028c23d163f91b2b1e5d8071a70c7fb133ac203b414302242719cd40dba8d733"
	I1217 20:15:23.553209  497920 cri.go:89] found id: "85713e1610062bb4691b694312bf4faa71e96232dd94bc9fc564053646cde3a0"
	I1217 20:15:23.553213  497920 cri.go:89] found id: "cb82734cbf7f969e92c5a0b06400eb1d90d484748c850c4888051348e720776e"
	I1217 20:15:23.553216  497920 cri.go:89] found id: "f60e3e143ad43d5768067a6b27a4ece464edd1136c6931405c68bea040dd1097"
	I1217 20:15:23.553219  497920 cri.go:89] found id: "18d9f4acabfa7df7f42816d2545211aeda7f7362f2af1f80e188ea780addc6f8"
	I1217 20:15:23.553224  497920 cri.go:89] found id: "fead2bfafa7367c08236e1dd9040e848a879a067c3b70a78578d71429854ebf6"
	I1217 20:15:23.553232  497920 cri.go:89] found id: "2b6e269a93d8bccdf1b28bd4369628c766dbf092b4be199c7da9d8d756358c68"
	I1217 20:15:23.553237  497920 cri.go:89] found id: "7fd15b6b594715c56d64ee54d7a6852f04c749cfdf1c08a4bcf60d7a2e38d3b6"
	I1217 20:15:23.553240  497920 cri.go:89] found id: "6311d7d7f6f04d57961651f5acfdebe6792efa3db55df12e18440d726be5abcf"
	I1217 20:15:23.553243  497920 cri.go:89] found id: "6481ede2a21b9816559407d8564d57427a11b523832053f995eb07f4d1ce83bc"
	I1217 20:15:23.553247  497920 cri.go:89] found id: "873499eab93d6e655f10d1bf2c19abe29efc74e958b7418723b5c7c02699e059"
	I1217 20:15:23.553253  497920 cri.go:89] found id: "45a4f23a594c0e4311b2eac64abad5f26d74229e5aa0a19e11b7b7c18f0e227d"
	I1217 20:15:23.553255  497920 cri.go:89] found id: "dd0351330b604610224568f93b842fec24e6e4e932b309a6741ffab6f0d17b9f"
	I1217 20:15:23.553258  497920 cri.go:89] found id: ""
	I1217 20:15:23.553308  497920 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 20:15:23.568663  497920 out.go:203] 
	W1217 20:15:23.571715  497920 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T20:15:23Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T20:15:23Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 20:15:23.571745  497920 out.go:285] * 
	* 
	W1217 20:15:23.577438  497920 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 20:15:23.580417  497920 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable registry-creds addon: args "out/minikube-linux-arm64 -p addons-052340 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.48s)

TestAddons/parallel/Ingress (144.12s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-052340 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-052340 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-052340 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [ad16b424-c4cb-44fa-98f9-07a700b7df1b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [ad16b424-c4cb-44fa-98f9-07a700b7df1b] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003766209s
I1217 20:14:52.565577  488412 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-arm64 -p addons-052340 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:266: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-052340 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.973670376s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:282: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:290: (dbg) Run:  kubectl --context addons-052340 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-arm64 -p addons-052340 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.49.2
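The exit status 28 surfaced above is curl's, and curl's code 28 is CURLE_OPERATION_TIMEDOUT: the request to the ingress on 127.0.0.1 inside the node hung until ssh gave up, rather than returning a wrong response. A hedged probe with an explicit time bound, using standard curl flags; the crictl name filter is an assumption about the controller container's name:

	# Bound the request to 10s and show the exchange instead of waiting out the ssh timeout.
	out/minikube-linux-arm64 -p addons-052340 ssh "curl -sv --max-time 10 http://127.0.0.1/ -H 'Host: nginx.example.com'"
	# Assumption: verify the ingress-nginx controller container is actually running on the node.
	out/minikube-linux-arm64 -p addons-052340 ssh "sudo crictl ps --name controller"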
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect addons-052340
helpers_test.go:244: (dbg) docker inspect addons-052340:

-- stdout --
	[
	    {
	        "Id": "b27951342508b91edb73f9914fc67563412a9b1c899ae0a2e16b6aa92b97dafa",
	        "Created": "2025-12-17T20:11:52.64290744Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 489812,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T20:11:52.704179967Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/b27951342508b91edb73f9914fc67563412a9b1c899ae0a2e16b6aa92b97dafa/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b27951342508b91edb73f9914fc67563412a9b1c899ae0a2e16b6aa92b97dafa/hostname",
	        "HostsPath": "/var/lib/docker/containers/b27951342508b91edb73f9914fc67563412a9b1c899ae0a2e16b6aa92b97dafa/hosts",
	        "LogPath": "/var/lib/docker/containers/b27951342508b91edb73f9914fc67563412a9b1c899ae0a2e16b6aa92b97dafa/b27951342508b91edb73f9914fc67563412a9b1c899ae0a2e16b6aa92b97dafa-json.log",
	        "Name": "/addons-052340",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-052340:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-052340",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b27951342508b91edb73f9914fc67563412a9b1c899ae0a2e16b6aa92b97dafa",
	                "LowerDir": "/var/lib/docker/overlay2/41d14526c9f9e2c943fa6c174fd3eacaacc70a9877d3ba01090549f4673e6a14-init/diff:/var/lib/docker/overlay2/c4759b344ecb109c83d66e4ef56c76903f1d1e597efb86ab2d2c7911b1130a8f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/41d14526c9f9e2c943fa6c174fd3eacaacc70a9877d3ba01090549f4673e6a14/merged",
	                "UpperDir": "/var/lib/docker/overlay2/41d14526c9f9e2c943fa6c174fd3eacaacc70a9877d3ba01090549f4673e6a14/diff",
	                "WorkDir": "/var/lib/docker/overlay2/41d14526c9f9e2c943fa6c174fd3eacaacc70a9877d3ba01090549f4673e6a14/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-052340",
	                "Source": "/var/lib/docker/volumes/addons-052340/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-052340",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-052340",
	                "name.minikube.sigs.k8s.io": "addons-052340",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "bf40f6023f9adc85017d09c172eba670e4306c6dafedce644fc3f08c08da1e32",
	            "SandboxKey": "/var/run/docker/netns/bf40f6023f9a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33163"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33164"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33167"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33165"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33166"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-052340": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e2:14:c3:fc:f0:b4",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d76a8eefec5c23c1ba4193d7d9ab608b42400bf214e60dbe902877081ec089a0",
	                    "EndpointID": "0b85bcda4414468275eb68480bcf440a128467b53a640057593867b94979d6f7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-052340",
	                        "b27951342508"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-052340 -n addons-052340
helpers_test.go:253: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p addons-052340 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p addons-052340 logs -n 25: (1.656777291s)
helpers_test.go:261: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-docker-133846                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-133846 │ jenkins │ v1.37.0 │ 17 Dec 25 20:11 UTC │ 17 Dec 25 20:11 UTC │
	│ start   │ --download-only -p binary-mirror-177191 --alsologtostderr --binary-mirror http://127.0.0.1:41401 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-177191   │ jenkins │ v1.37.0 │ 17 Dec 25 20:11 UTC │                     │
	│ delete  │ -p binary-mirror-177191                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-177191   │ jenkins │ v1.37.0 │ 17 Dec 25 20:11 UTC │ 17 Dec 25 20:11 UTC │
	│ addons  │ enable dashboard -p addons-052340                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-052340          │ jenkins │ v1.37.0 │ 17 Dec 25 20:11 UTC │                     │
	│ addons  │ disable dashboard -p addons-052340                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-052340          │ jenkins │ v1.37.0 │ 17 Dec 25 20:11 UTC │                     │
	│ start   │ -p addons-052340 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-052340          │ jenkins │ v1.37.0 │ 17 Dec 25 20:11 UTC │ 17 Dec 25 20:13 UTC │
	│ addons  │ addons-052340 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-052340          │ jenkins │ v1.37.0 │ 17 Dec 25 20:13 UTC │                     │
	│ addons  │ addons-052340 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-052340          │ jenkins │ v1.37.0 │ 17 Dec 25 20:14 UTC │                     │
	│ addons  │ addons-052340 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-052340          │ jenkins │ v1.37.0 │ 17 Dec 25 20:14 UTC │                     │
	│ addons  │ addons-052340 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-052340          │ jenkins │ v1.37.0 │ 17 Dec 25 20:14 UTC │                     │
	│ ip      │ addons-052340 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-052340          │ jenkins │ v1.37.0 │ 17 Dec 25 20:14 UTC │ 17 Dec 25 20:14 UTC │
	│ addons  │ addons-052340 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-052340          │ jenkins │ v1.37.0 │ 17 Dec 25 20:14 UTC │                     │
	│ ssh     │ addons-052340 ssh cat /opt/local-path-provisioner/pvc-daf0f2d6-ee80-4a73-a7d9-dc14f656f00d_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-052340          │ jenkins │ v1.37.0 │ 17 Dec 25 20:14 UTC │ 17 Dec 25 20:14 UTC │
	│ addons  │ addons-052340 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-052340          │ jenkins │ v1.37.0 │ 17 Dec 25 20:14 UTC │                     │
	│ addons  │ addons-052340 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-052340          │ jenkins │ v1.37.0 │ 17 Dec 25 20:14 UTC │                     │
	│ addons  │ enable headlamp -p addons-052340 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-052340          │ jenkins │ v1.37.0 │ 17 Dec 25 20:14 UTC │                     │
	│ addons  │ addons-052340 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-052340          │ jenkins │ v1.37.0 │ 17 Dec 25 20:14 UTC │                     │
	│ addons  │ addons-052340 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-052340          │ jenkins │ v1.37.0 │ 17 Dec 25 20:14 UTC │                     │
	│ addons  │ addons-052340 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-052340          │ jenkins │ v1.37.0 │ 17 Dec 25 20:14 UTC │                     │
	│ ssh     │ addons-052340 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-052340          │ jenkins │ v1.37.0 │ 17 Dec 25 20:14 UTC │                     │
	│ addons  │ addons-052340 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-052340          │ jenkins │ v1.37.0 │ 17 Dec 25 20:15 UTC │                     │
	│ addons  │ addons-052340 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-052340          │ jenkins │ v1.37.0 │ 17 Dec 25 20:15 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-052340                                                                                                                                                                                                                                                                                                                                                                                           │ addons-052340          │ jenkins │ v1.37.0 │ 17 Dec 25 20:15 UTC │ 17 Dec 25 20:15 UTC │
	│ addons  │ addons-052340 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-052340          │ jenkins │ v1.37.0 │ 17 Dec 25 20:15 UTC │                     │
	│ ip      │ addons-052340 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-052340          │ jenkins │ v1.37.0 │ 17 Dec 25 20:17 UTC │ 17 Dec 25 20:17 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 20:11:46
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 20:11:46.376525  489418 out.go:360] Setting OutFile to fd 1 ...
	I1217 20:11:46.376697  489418 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:11:46.376729  489418 out.go:374] Setting ErrFile to fd 2...
	I1217 20:11:46.376751  489418 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:11:46.377025  489418 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-485134/.minikube/bin
	I1217 20:11:46.377550  489418 out.go:368] Setting JSON to false
	I1217 20:11:46.378393  489418 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":10456,"bootTime":1765991851,"procs":145,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1217 20:11:46.378496  489418 start.go:143] virtualization:  
	I1217 20:11:46.380149  489418 out.go:179] * [addons-052340] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 20:11:46.381510  489418 notify.go:221] Checking for updates...
	I1217 20:11:46.383938  489418 out.go:179]   - MINIKUBE_LOCATION=21808
	I1217 20:11:46.385162  489418 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 20:11:46.386267  489418 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-485134/kubeconfig
	I1217 20:11:46.387359  489418 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-485134/.minikube
	I1217 20:11:46.388470  489418 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 20:11:46.389747  489418 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 20:11:46.391252  489418 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 20:11:46.413237  489418 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 20:11:46.413366  489418 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 20:11:46.478087  489418 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-17 20:11:46.468719631 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 20:11:46.478196  489418 docker.go:319] overlay module found
	I1217 20:11:46.479929  489418 out.go:179] * Using the docker driver based on user configuration
	I1217 20:11:46.481354  489418 start.go:309] selected driver: docker
	I1217 20:11:46.481371  489418 start.go:927] validating driver "docker" against <nil>
	I1217 20:11:46.481393  489418 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 20:11:46.482161  489418 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 20:11:46.536044  489418 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-17 20:11:46.5267006 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 20:11:46.536205  489418 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 20:11:46.536438  489418 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 20:11:46.537858  489418 out.go:179] * Using Docker driver with root privileges
	I1217 20:11:46.539201  489418 cni.go:84] Creating CNI manager for ""
	I1217 20:11:46.539265  489418 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 20:11:46.539279  489418 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1217 20:11:46.539358  489418 start.go:353] cluster config:
	{Name:addons-052340 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-052340 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:11:46.540768  489418 out.go:179] * Starting "addons-052340" primary control-plane node in "addons-052340" cluster
	I1217 20:11:46.542021  489418 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 20:11:46.543869  489418 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 20:11:46.545388  489418 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 20:11:46.545398  489418 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 20:11:46.545433  489418 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21808-485134/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4
	I1217 20:11:46.545449  489418 cache.go:65] Caching tarball of preloaded images
	I1217 20:11:46.545533  489418 preload.go:238] Found /home/jenkins/minikube-integration/21808-485134/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1217 20:11:46.545543  489418 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1217 20:11:46.545893  489418 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/config.json ...
	I1217 20:11:46.545914  489418 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/config.json: {Name:mk1f94198e9fff9e1603e7d6d656a228af0111a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:11:46.564988  489418 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 20:11:46.565012  489418 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 20:11:46.565028  489418 cache.go:243] Successfully downloaded all kic artifacts
	I1217 20:11:46.565059  489418 start.go:360] acquireMachinesLock for addons-052340: {Name:mk6a23b5fdd10e06656251611d99d4457cfa70cf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 20:11:46.565166  489418 start.go:364] duration metric: took 88.181µs to acquireMachinesLock for "addons-052340"
	I1217 20:11:46.565197  489418 start.go:93] Provisioning new machine with config: &{Name:addons-052340 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-052340 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 20:11:46.565277  489418 start.go:125] createHost starting for "" (driver="docker")
	I1217 20:11:46.566877  489418 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1217 20:11:46.567107  489418 start.go:159] libmachine.API.Create for "addons-052340" (driver="docker")
	I1217 20:11:46.567141  489418 client.go:173] LocalClient.Create starting
	I1217 20:11:46.567257  489418 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem
	I1217 20:11:46.708126  489418 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem
	I1217 20:11:47.009553  489418 cli_runner.go:164] Run: docker network inspect addons-052340 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1217 20:11:47.026907  489418 cli_runner.go:211] docker network inspect addons-052340 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1217 20:11:47.027005  489418 network_create.go:284] running [docker network inspect addons-052340] to gather additional debugging logs...
	I1217 20:11:47.027028  489418 cli_runner.go:164] Run: docker network inspect addons-052340
	W1217 20:11:47.044833  489418 cli_runner.go:211] docker network inspect addons-052340 returned with exit code 1
	I1217 20:11:47.044863  489418 network_create.go:287] error running [docker network inspect addons-052340]: docker network inspect addons-052340: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-052340 not found
	I1217 20:11:47.044891  489418 network_create.go:289] output of [docker network inspect addons-052340]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-052340 not found
	
	** /stderr **
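	The inspect failure above is the expected first-run state: the profile network does not exist yet, and minikube re-runs a bare inspect only to capture the stderr shown. The same probe by hand (profile name from this run):

	    docker network inspect addons-052340 || echo "not created yet"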
	I1217 20:11:47.045010  489418 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 20:11:47.062646  489418 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a44f20}
	I1217 20:11:47.062708  489418 network_create.go:124] attempt to create docker network addons-052340 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1217 20:11:47.062766  489418 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-052340 addons-052340
	I1217 20:11:47.125107  489418 network_create.go:108] docker network addons-052340 192.168.49.0/24 created
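	For reference, the network-create step can be reproduced standalone with the same flags minikube logged above; a minimal sketch (minikube's bookkeeping labels omitted for brevity):

	    # Dedicated bridge network for the cluster, matching the subnet chosen above
	    docker network create --driver=bridge \
	      --subnet=192.168.49.0/24 --gateway=192.168.49.1 \
	      -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
	      addons-052340
	    # Confirm the IPAM config took effect
	    docker network inspect addons-052340 --format '{{.IPAM.Config}}'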
	I1217 20:11:47.125144  489418 kic.go:121] calculated static IP "192.168.49.2" for the "addons-052340" container
	I1217 20:11:47.125242  489418 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1217 20:11:47.142547  489418 cli_runner.go:164] Run: docker volume create addons-052340 --label name.minikube.sigs.k8s.io=addons-052340 --label created_by.minikube.sigs.k8s.io=true
	I1217 20:11:47.160469  489418 oci.go:103] Successfully created a docker volume addons-052340
	I1217 20:11:47.160556  489418 cli_runner.go:164] Run: docker run --rm --name addons-052340-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-052340 --entrypoint /usr/bin/test -v addons-052340:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1217 20:11:48.607937  489418 cli_runner.go:217] Completed: docker run --rm --name addons-052340-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-052340 --entrypoint /usr/bin/test -v addons-052340:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib: (1.447330667s)
	I1217 20:11:48.607969  489418 oci.go:107] Successfully prepared a docker volume addons-052340
	I1217 20:11:48.608014  489418 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 20:11:48.608023  489418 kic.go:194] Starting extracting preloaded images to volume ...
	I1217 20:11:48.608095  489418 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21808-485134/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-052340:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
	I1217 20:11:52.573544  489418 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21808-485134/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-052340:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (3.965397174s)
	I1217 20:11:52.573576  489418 kic.go:203] duration metric: took 3.96554842s to extract preloaded images to volume ...
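	The preload tarball is unpacked directly into the addons-052340 volume through a throwaway kicbase container, so the node starts with its image store already populated. A hypothetical spot-check of what landed in the volume, assuming the image ships coreutils at the usual Debian paths (digest omitted for brevity):

	    docker run --rm --entrypoint /usr/bin/du -v addons-052340:/var \
	      gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141 -sh /var/lib/containers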
	W1217 20:11:52.573712  489418 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1217 20:11:52.573824  489418 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1217 20:11:52.627936  489418 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-052340 --name addons-052340 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-052340 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-052340 --network addons-052340 --ip 192.168.49.2 --volume addons-052340:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
	I1217 20:11:52.929576  489418 cli_runner.go:164] Run: docker container inspect addons-052340 --format={{.State.Running}}
	I1217 20:11:52.950604  489418 cli_runner.go:164] Run: docker container inspect addons-052340 --format={{.State.Status}}
	I1217 20:11:52.975745  489418 cli_runner.go:164] Run: docker exec addons-052340 stat /var/lib/dpkg/alternatives/iptables
	I1217 20:11:53.039386  489418 oci.go:144] the created container "addons-052340" has a running status.
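	Because every port is published to an ephemeral localhost port (--publish=127.0.0.1::22 and friends in the docker run above), the SSH port has to be looked up per container. minikube's own inspect template works standalone:

	    # Host port Docker mapped to the container's 22/tcp (33163 in this run)
	    docker container inspect addons-052340 \
	      --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'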
	I1217 20:11:53.039414  489418 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21808-485134/.minikube/machines/addons-052340/id_rsa...
	I1217 20:11:53.405617  489418 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21808-485134/.minikube/machines/addons-052340/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1217 20:11:53.427967  489418 cli_runner.go:164] Run: docker container inspect addons-052340 --format={{.State.Status}}
	I1217 20:11:53.454843  489418 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1217 20:11:53.454869  489418 kic_runner.go:114] Args: [docker exec --privileged addons-052340 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1217 20:11:53.524582  489418 cli_runner.go:164] Run: docker container inspect addons-052340 --format={{.State.Status}}
	I1217 20:11:53.552457  489418 machine.go:94] provisionDockerMachine start ...
	I1217 20:11:53.552543  489418 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-052340
	I1217 20:11:53.580175  489418 main.go:143] libmachine: Using SSH client type: native
	I1217 20:11:53.580495  489418 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33163 <nil> <nil>}
	I1217 20:11:53.580511  489418 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 20:11:53.581114  489418 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59762->127.0.0.1:33163: read: connection reset by peer
	I1217 20:11:56.714967  489418 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-052340
	
	I1217 20:11:56.714995  489418 ubuntu.go:182] provisioning hostname "addons-052340"
	I1217 20:11:56.715056  489418 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-052340
	I1217 20:11:56.733198  489418 main.go:143] libmachine: Using SSH client type: native
	I1217 20:11:56.733576  489418 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33163 <nil> <nil>}
	I1217 20:11:56.733600  489418 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-052340 && echo "addons-052340" | sudo tee /etc/hostname
	I1217 20:11:56.876817  489418 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-052340
	
	I1217 20:11:56.876899  489418 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-052340
	I1217 20:11:56.894344  489418 main.go:143] libmachine: Using SSH client type: native
	I1217 20:11:56.894644  489418 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33163 <nil> <nil>}
	I1217 20:11:56.894677  489418 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-052340' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-052340/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-052340' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 20:11:57.031890  489418 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 20:11:57.031918  489418 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21808-485134/.minikube CaCertPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21808-485134/.minikube}
	I1217 20:11:57.031949  489418 ubuntu.go:190] setting up certificates
	I1217 20:11:57.031959  489418 provision.go:84] configureAuth start
	I1217 20:11:57.032020  489418 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-052340
	I1217 20:11:57.049428  489418 provision.go:143] copyHostCerts
	I1217 20:11:57.049517  489418 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem (1082 bytes)
	I1217 20:11:57.049649  489418 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem (1123 bytes)
	I1217 20:11:57.049722  489418 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem (1675 bytes)
	I1217 20:11:57.049784  489418 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem org=jenkins.addons-052340 san=[127.0.0.1 192.168.49.2 addons-052340 localhost minikube]
	I1217 20:11:57.275424  489418 provision.go:177] copyRemoteCerts
	I1217 20:11:57.275505  489418 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 20:11:57.275545  489418 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-052340
	I1217 20:11:57.293416  489418 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/addons-052340/id_rsa Username:docker}
	I1217 20:11:57.391686  489418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 20:11:57.409470  489418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1217 20:11:57.426555  489418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 20:11:57.444377  489418 provision.go:87] duration metric: took 412.405297ms to configureAuth
	I1217 20:11:57.444404  489418 ubuntu.go:206] setting minikube options for container-runtime
	I1217 20:11:57.444597  489418 config.go:182] Loaded profile config "addons-052340": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:11:57.444707  489418 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-052340
	I1217 20:11:57.461730  489418 main.go:143] libmachine: Using SSH client type: native
	I1217 20:11:57.462058  489418 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33163 <nil> <nil>}
	I1217 20:11:57.462072  489418 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 20:11:57.930572  489418 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 20:11:57.930656  489418 machine.go:97] duration metric: took 4.378178292s to provisionDockerMachine
	I1217 20:11:57.930682  489418 client.go:176] duration metric: took 11.363528665s to LocalClient.Create
	I1217 20:11:57.930728  489418 start.go:167] duration metric: took 11.363621482s to libmachine.API.Create "addons-052340"
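	Provisioning ends by writing CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube over SSH and restarting CRI-O (see the tee command above). A spot-check from the host, using the same binary and profile as the rest of this run:

	    out/minikube-linux-arm64 -p addons-052340 ssh cat /etc/sysconfig/crio.minikube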
	I1217 20:11:57.930763  489418 start.go:293] postStartSetup for "addons-052340" (driver="docker")
	I1217 20:11:57.930790  489418 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 20:11:57.930896  489418 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 20:11:57.930961  489418 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-052340
	I1217 20:11:57.948396  489418 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/addons-052340/id_rsa Username:docker}
	I1217 20:11:58.048207  489418 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 20:11:58.051718  489418 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 20:11:58.051763  489418 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 20:11:58.051778  489418 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-485134/.minikube/addons for local assets ...
	I1217 20:11:58.051853  489418 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-485134/.minikube/files for local assets ...
	I1217 20:11:58.051886  489418 start.go:296] duration metric: took 121.101465ms for postStartSetup
	I1217 20:11:58.052221  489418 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-052340
	I1217 20:11:58.069490  489418 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/config.json ...
	I1217 20:11:58.069789  489418 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 20:11:58.069849  489418 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-052340
	I1217 20:11:58.088555  489418 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/addons-052340/id_rsa Username:docker}
	I1217 20:11:58.184710  489418 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 20:11:58.189364  489418 start.go:128] duration metric: took 11.624073501s to createHost
	I1217 20:11:58.189397  489418 start.go:83] releasing machines lock for "addons-052340", held for 11.624219742s
	I1217 20:11:58.189468  489418 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-052340
	I1217 20:11:58.206175  489418 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 20:11:58.206237  489418 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem (1082 bytes)
	I1217 20:11:58.206277  489418 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem (1123 bytes)
	I1217 20:11:58.206306  489418 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem (1675 bytes)
	W1217 20:11:58.206393  489418 start.go:789] pre-probe CA setup failed: create ca cert file asset for /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt: stat: stat /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt: no such file or directory
	I1217 20:11:58.206462  489418 ssh_runner.go:195] Run: cat /version.json
	I1217 20:11:58.206506  489418 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-052340
	I1217 20:11:58.206776  489418 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 20:11:58.206833  489418 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-052340
	I1217 20:11:58.227023  489418 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/addons-052340/id_rsa Username:docker}
	I1217 20:11:58.243795  489418 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/addons-052340/id_rsa Username:docker}
	I1217 20:11:58.331107  489418 ssh_runner.go:195] Run: systemctl --version
	I1217 20:11:58.426312  489418 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 20:11:58.470733  489418 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 20:11:58.475230  489418 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 20:11:58.475303  489418 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 20:11:58.504558  489418 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1217 20:11:58.504628  489418 start.go:496] detecting cgroup driver to use...
	I1217 20:11:58.504668  489418 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 20:11:58.504726  489418 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 20:11:58.523053  489418 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 20:11:58.536489  489418 docker.go:218] disabling cri-docker service (if available) ...
	I1217 20:11:58.536555  489418 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 20:11:58.554830  489418 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 20:11:58.573852  489418 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 20:11:58.696845  489418 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 20:11:58.813085  489418 docker.go:234] disabling docker service ...
	I1217 20:11:58.813201  489418 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 20:11:58.834375  489418 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 20:11:58.848224  489418 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 20:11:58.968701  489418 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 20:11:59.101274  489418 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 20:11:59.114173  489418 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 20:11:59.128115  489418 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 20:11:59.128183  489418 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:11:59.137733  489418 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1217 20:11:59.137802  489418 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:11:59.147317  489418 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:11:59.156719  489418 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:11:59.165251  489418 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 20:11:59.173231  489418 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:11:59.181879  489418 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:11:59.195349  489418 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
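	The chain of sed edits above converges the CRI-O drop-in on four settings: the pause image, the cgroupfs cgroup manager, a pod-scoped conmon cgroup, and the unprivileged-port sysctl. One way to confirm the resulting drop-in from inside the node:

	    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	      /etc/crio/crio.conf.d/02-crio.conf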
	I1217 20:11:59.204330  489418 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 20:11:59.211917  489418 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 20:11:59.219160  489418 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:11:59.330531  489418 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 20:11:59.514675  489418 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 20:11:59.514771  489418 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 20:11:59.518554  489418 start.go:564] Will wait 60s for crictl version
	I1217 20:11:59.518622  489418 ssh_runner.go:195] Run: which crictl
	I1217 20:11:59.522262  489418 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 20:11:59.554889  489418 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
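	crictl reaches CRI-O through the endpoint written to /etc/crictl.yaml earlier; the same version query can be made explicit without the config file, a sketch assuming the socket path from this run:

	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version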
	I1217 20:11:59.555000  489418 ssh_runner.go:195] Run: crio --version
	I1217 20:11:59.584235  489418 ssh_runner.go:195] Run: crio --version
	I1217 20:11:59.617918  489418 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	I1217 20:11:59.620791  489418 cli_runner.go:164] Run: docker network inspect addons-052340 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 20:11:59.636829  489418 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1217 20:11:59.640673  489418 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
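	The one-liner above is minikube's idempotent /etc/hosts update: filter out any stale host.minikube.internal entry, append the fresh mapping, and sudo-copy the temp file back over /etc/hosts. To verify from the host (same binary and profile as this run):

	    out/minikube-linux-arm64 -p addons-052340 ssh grep host.minikube.internal /etc/hosts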
	I1217 20:11:59.650436  489418 kubeadm.go:884] updating cluster {Name:addons-052340 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-052340 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 20:11:59.650560  489418 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 20:11:59.650614  489418 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 20:11:59.699932  489418 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 20:11:59.699964  489418 crio.go:433] Images already preloaded, skipping extraction
	I1217 20:11:59.700024  489418 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 20:11:59.724844  489418 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 20:11:59.724871  489418 cache_images.go:86] Images are preloaded, skipping loading
	I1217 20:11:59.724879  489418 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.3 crio true true} ...
	I1217 20:11:59.724985  489418 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-052340 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:addons-052340 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
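	The empty ExecStart= line in the unit above is the standard systemd drop-in idiom: it clears any inherited ExecStart before the override sets the real command line. The merged unit can be inspected on the node once the files below are copied over:

	    systemctl cat kubelet --no-pager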
	I1217 20:11:59.725070  489418 ssh_runner.go:195] Run: crio config
	I1217 20:11:59.777469  489418 cni.go:84] Creating CNI manager for ""
	I1217 20:11:59.777492  489418 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 20:11:59.777502  489418 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 20:11:59.777535  489418 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-052340 NodeName:addons-052340 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 20:11:59.777688  489418 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-052340"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 20:11:59.777768  489418 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1217 20:11:59.785482  489418 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 20:11:59.785555  489418 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 20:11:59.793284  489418 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1217 20:11:59.805897  489418 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 20:11:59.819492  489418 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
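	The generated kubeadm config is staged on the node as /var/tmp/minikube/kubeadm.yaml.new (the 2210-byte scp above). Before init it can be exercised without side effects; a sketch run inside the node, using the kubeadm binary path from this run:

	    sudo /var/lib/minikube/binaries/v1.34.3/kubeadm init \
	      --config /var/tmp/minikube/kubeadm.yaml.new --dry-run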
	I1217 20:11:59.832503  489418 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1217 20:11:59.836196  489418 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 20:11:59.846264  489418 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:11:59.954341  489418 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 20:11:59.969581  489418 certs.go:69] Setting up /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340 for IP: 192.168.49.2
	I1217 20:11:59.969600  489418 certs.go:195] generating shared ca certs ...
	I1217 20:11:59.969617  489418 certs.go:227] acquiring lock for ca certs: {Name:mk2b1bd9fa0b029b02dd38a7b1b08717d4426eca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:11:59.969832  489418 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.key
	I1217 20:12:00.418712  489418 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt ...
	I1217 20:12:00.418761  489418 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt: {Name:mkc7b12a3381fbc450f246bfde676cc2781e84c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:12:00.419063  489418 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-485134/.minikube/ca.key ...
	I1217 20:12:00.419261  489418 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-485134/.minikube/ca.key: {Name:mk2b0252dea576b037b642bb6b70cd65f4ad3caa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:12:00.419688  489418 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.key
	I1217 20:12:00.774249  489418 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.crt ...
	I1217 20:12:00.774286  489418 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.crt: {Name:mk3281bafadf3317e622593d0a7b922e4a39df91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:12:00.774470  489418 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.key ...
	I1217 20:12:00.774478  489418 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.key: {Name:mkab881eb90efdc460b8def7dbaea8828c0e513d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:12:00.774577  489418 certs.go:257] generating profile certs ...
	I1217 20:12:00.774640  489418 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/client.key
	I1217 20:12:00.774653  489418 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/client.crt with IP's: []
	I1217 20:12:01.147204  489418 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/client.crt ...
	I1217 20:12:01.147238  489418 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/client.crt: {Name:mk31f366572cdb41cb330e01f195ae0036e4e610 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:12:01.147437  489418 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/client.key ...
	I1217 20:12:01.147450  489418 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/client.key: {Name:mkf1c8df39135eec2278174f3ef12fb552c66234 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:12:01.147550  489418 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/apiserver.key.00ea8426
	I1217 20:12:01.147572  489418 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/apiserver.crt.00ea8426 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1217 20:12:01.329464  489418 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/apiserver.crt.00ea8426 ...
	I1217 20:12:01.329499  489418 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/apiserver.crt.00ea8426: {Name:mk014ba9e983f4a1a64ff112d57da7d7525e6189 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:12:01.329694  489418 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/apiserver.key.00ea8426 ...
	I1217 20:12:01.329708  489418 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/apiserver.key.00ea8426: {Name:mk1ee45456823d68e7c5052c1a87a3d9c89d927f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:12:01.329791  489418 certs.go:382] copying /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/apiserver.crt.00ea8426 -> /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/apiserver.crt
	I1217 20:12:01.329871  489418 certs.go:386] copying /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/apiserver.key.00ea8426 -> /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/apiserver.key
	I1217 20:12:01.329926  489418 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/proxy-client.key
	I1217 20:12:01.329942  489418 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/proxy-client.crt with IP's: []
	I1217 20:12:01.463380  489418 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/proxy-client.crt ...
	I1217 20:12:01.463418  489418 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/proxy-client.crt: {Name:mk7e7bdac14a1ae213acf34c87bb2bbde9d67604 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:12:01.463623  489418 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/proxy-client.key ...
	I1217 20:12:01.463641  489418 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/proxy-client.key: {Name:mk52409309336359a936c0b7a282fe0bba85a85b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:12:01.463834  489418 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 20:12:01.463881  489418 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem (1082 bytes)
	I1217 20:12:01.463912  489418 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem (1123 bytes)
	I1217 20:12:01.463942  489418 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem (1675 bytes)
	I1217 20:12:01.464525  489418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 20:12:01.486389  489418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 20:12:01.505993  489418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 20:12:01.524043  489418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1217 20:12:01.542638  489418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1217 20:12:01.561179  489418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 20:12:01.584245  489418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 20:12:01.604887  489418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 20:12:01.623759  489418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 20:12:01.643808  489418 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
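All of the certificates minted above now live under /var/lib/minikube/certs on the node. The apiserver cert in particular was signed for the service VIP 10.96.0.1, 127.0.0.1, 10.0.0.1 and the node IP 192.168.49.2 (the IP list at 20:12:01.147572), which can be confirmed in place:

    # Sketch: list the SANs on the copied apiserver cert (run via "minikube ssh").
    openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt \
      | grep -A 1 'Subject Alternative Name'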
	I1217 20:12:01.657093  489418 ssh_runner.go:195] Run: openssl version
	I1217 20:12:01.663755  489418 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:12:01.671652  489418 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 20:12:01.679525  489418 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:12:01.683454  489418 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:12:01.683531  489418 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:12:01.725140  489418 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 20:12:01.732949  489418 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
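The b5213941 in the symlink name is not arbitrary: OpenSSL resolves trust anchors in /etc/ssl/certs by subject-name hash, and the "openssl x509 -hash" call two lines up is what computed it. The wiring can be reproduced by hand:

    # Sketch: recompute the subject hash and recreate the trust-store link.
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"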
	I1217 20:12:01.740674  489418 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 20:12:01.744480  489418 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 20:12:01.744532  489418 kubeadm.go:401] StartCluster: {Name:addons-052340 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-052340 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:12:01.744604  489418 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 20:12:01.744661  489418 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 20:12:01.775331  489418 cri.go:89] found id: ""
	I1217 20:12:01.775405  489418 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 20:12:01.783479  489418 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 20:12:01.791413  489418 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 20:12:01.791500  489418 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 20:12:01.799469  489418 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 20:12:01.799491  489418 kubeadm.go:158] found existing configuration files:
	
	I1217 20:12:01.799546  489418 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 20:12:01.807520  489418 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 20:12:01.807660  489418 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 20:12:01.815372  489418 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 20:12:01.823405  489418 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 20:12:01.823479  489418 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 20:12:01.831306  489418 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 20:12:01.839915  489418 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 20:12:01.840039  489418 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 20:12:01.847907  489418 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 20:12:01.856062  489418 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 20:12:01.856154  489418 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 20:12:01.863712  489418 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 20:12:01.907549  489418 kubeadm.go:319] [init] Using Kubernetes version: v1.34.3
	I1217 20:12:01.907824  489418 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 20:12:01.930677  489418 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 20:12:01.930802  489418 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1217 20:12:01.930875  489418 kubeadm.go:319] OS: Linux
	I1217 20:12:01.930946  489418 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 20:12:01.931014  489418 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 20:12:01.931087  489418 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 20:12:01.931159  489418 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 20:12:01.931235  489418 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 20:12:01.931306  489418 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 20:12:01.931377  489418 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 20:12:01.931451  489418 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 20:12:01.931526  489418 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 20:12:02.000828  489418 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 20:12:02.000997  489418 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 20:12:02.001099  489418 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 20:12:02.012485  489418 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 20:12:02.018302  489418 out.go:252]   - Generating certificates and keys ...
	I1217 20:12:02.018456  489418 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 20:12:02.018564  489418 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 20:12:03.039164  489418 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 20:12:03.502741  489418 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 20:12:04.229236  489418 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 20:12:05.338977  489418 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 20:12:05.744696  489418 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 20:12:05.745011  489418 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-052340 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1217 20:12:06.120704  489418 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 20:12:06.121220  489418 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-052340 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1217 20:12:06.276842  489418 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 20:12:06.727242  489418 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 20:12:07.588433  489418 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 20:12:07.588899  489418 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 20:12:09.083503  489418 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 20:12:09.687572  489418 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 20:12:09.972998  489418 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 20:12:10.209902  489418 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 20:12:10.928010  489418 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 20:12:10.929165  489418 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 20:12:10.932103  489418 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 20:12:10.935656  489418 out.go:252]   - Booting up control plane ...
	I1217 20:12:10.935765  489418 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 20:12:10.935845  489418 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 20:12:10.936742  489418 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 20:12:10.952499  489418 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 20:12:10.952793  489418 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 20:12:10.960721  489418 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 20:12:10.961052  489418 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 20:12:10.961274  489418 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 20:12:11.085169  489418 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 20:12:11.085284  489418 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 20:12:12.086449  489418 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001573378s
	I1217 20:12:12.090060  489418 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1217 20:12:12.090152  489418 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1217 20:12:12.090236  489418 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1217 20:12:12.090309  489418 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1217 20:12:15.458982  489418 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.368338217s
	I1217 20:12:17.408923  489418 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.318821203s
	I1217 20:12:18.093014  489418 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.002769609s
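The three control-plane-check probes above are plain HTTPS health endpoints, so they can be re-run by hand when a boot is slow (the certs are self-signed, hence -k; /healthz and /livez are anonymously readable under default kubeadm RBAC):

    # Sketch: the same probes kubeadm just ran, by hand.
    curl -ksf https://127.0.0.1:10257/healthz && echo controller-manager ok
    curl -ksf https://127.0.0.1:10259/livez  && echo scheduler ok
    curl -ksf https://192.168.49.2:8443/livez && echo apiserver ok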
	I1217 20:12:18.130532  489418 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1217 20:12:18.155175  489418 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1217 20:12:18.179902  489418 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1217 20:12:18.180116  489418 kubeadm.go:319] [mark-control-plane] Marking the node addons-052340 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1217 20:12:18.201337  489418 kubeadm.go:319] [bootstrap-token] Using token: o0jkvy.oy99iv7pltt4di17
	I1217 20:12:18.206345  489418 out.go:252]   - Configuring RBAC rules ...
	I1217 20:12:18.206473  489418 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1217 20:12:18.216302  489418 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1217 20:12:18.228312  489418 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1217 20:12:18.233821  489418 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1217 20:12:18.238719  489418 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1217 20:12:18.246557  489418 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1217 20:12:18.510360  489418 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1217 20:12:18.995131  489418 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1217 20:12:19.504224  489418 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1217 20:12:19.504244  489418 kubeadm.go:319] 
	I1217 20:12:19.504305  489418 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1217 20:12:19.504326  489418 kubeadm.go:319] 
	I1217 20:12:19.504403  489418 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1217 20:12:19.504407  489418 kubeadm.go:319] 
	I1217 20:12:19.504432  489418 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1217 20:12:19.504493  489418 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1217 20:12:19.504543  489418 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1217 20:12:19.504548  489418 kubeadm.go:319] 
	I1217 20:12:19.504602  489418 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1217 20:12:19.504622  489418 kubeadm.go:319] 
	I1217 20:12:19.504669  489418 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1217 20:12:19.504673  489418 kubeadm.go:319] 
	I1217 20:12:19.504730  489418 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1217 20:12:19.504809  489418 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1217 20:12:19.504878  489418 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1217 20:12:19.504882  489418 kubeadm.go:319] 
	I1217 20:12:19.504967  489418 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1217 20:12:19.505043  489418 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1217 20:12:19.505047  489418 kubeadm.go:319] 
	I1217 20:12:19.505130  489418 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token o0jkvy.oy99iv7pltt4di17 \
	I1217 20:12:19.505233  489418 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8f40ab2bade0ae5c3450e7595a76f8b890ef62a258572dfbcace94aca819ea89 \
	I1217 20:12:19.505253  489418 kubeadm.go:319] 	--control-plane 
	I1217 20:12:19.505257  489418 kubeadm.go:319] 
	I1217 20:12:19.505351  489418 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1217 20:12:19.505356  489418 kubeadm.go:319] 
	I1217 20:12:19.505438  489418 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token o0jkvy.oy99iv7pltt4di17 \
	I1217 20:12:19.505540  489418 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8f40ab2bade0ae5c3450e7595a76f8b890ef62a258572dfbcace94aca819ea89 
	I1217 20:12:19.508615  489418 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1217 20:12:19.508834  489418 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1217 20:12:19.508938  489418 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
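The --discovery-token-ca-cert-hash value in both join commands above is not a secret: it is a hash of the cluster CA's public key, and any node can recompute it from ca.crt with the standard kubeadm recipe (pointed here at minikube's cert dir from the [certs] step):

    # Sketch: recompute the discovery hash from the CA cert.
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl pkey -pubin -outform der \
      | openssl dgst -sha256 | sed 's/^.* /sha256:/'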
	I1217 20:12:19.508957  489418 cni.go:84] Creating CNI manager for ""
	I1217 20:12:19.508964  489418 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 20:12:19.512174  489418 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1217 20:12:19.515064  489418 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1217 20:12:19.519210  489418 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.3/kubectl ...
	I1217 20:12:19.519231  489418 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1217 20:12:19.532786  489418 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
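With the docker driver paired with the crio runtime, minikube selects kindnet as the CNI (the recommendation logged at 20:12:19.508964), and the apply above installs its manifest. Assuming the DaemonSet keeps kindnet's upstream name, the rollout can be watched with:

    # Sketch: wait for the CNI daemonset ("kindnet" is assumed, per upstream).
    kubectl -n kube-system rollout status daemonset/kindnet --timeout=90s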
	I1217 20:12:19.830186  489418 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1217 20:12:19.830375  489418 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:12:19.830508  489418 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-052340 minikube.k8s.io/updated_at=2025_12_17T20_12_19_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e7c291d147a7a4c759554efbd6d659a1a65fa869 minikube.k8s.io/name=addons-052340 minikube.k8s.io/primary=true
	I1217 20:12:19.987342  489418 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:12:19.987408  489418 ops.go:34] apiserver oom_adj: -16
	I1217 20:12:20.487524  489418 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:12:20.987772  489418 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:12:21.488445  489418 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:12:21.987565  489418 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:12:22.488136  489418 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:12:22.987433  489418 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:12:23.487456  489418 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:12:23.988272  489418 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:12:24.114766  489418 kubeadm.go:1114] duration metric: took 4.2844551s to wait for elevateKubeSystemPrivileges
	I1217 20:12:24.114794  489418 kubeadm.go:403] duration metric: took 22.370265885s to StartCluster
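The burst of "kubectl get sa default" calls between 20:12:19.987 and 20:12:24.114 is a readiness gate, not noise: the controller-manager creates the default ServiceAccount asynchronously, and workloads cannot be admitted into the namespace before it exists. The loop reduces to:

    # Sketch: poll (~500ms, matching the log cadence) until the SA exists.
    until sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done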
	I1217 20:12:24.114811  489418 settings.go:142] acquiring lock: {Name:mk91fac3a58d91b836cd0701183ef7cc1e571672 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:12:24.114926  489418 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21808-485134/kubeconfig
	I1217 20:12:24.115293  489418 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-485134/kubeconfig: {Name:mkc7214551fa855d6d4575d627c3ecd54291b351 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:12:24.115518  489418 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 20:12:24.115727  489418 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1217 20:12:24.116015  489418 config.go:182] Loaded profile config "addons-052340": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:12:24.116059  489418 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1217 20:12:24.116131  489418 addons.go:70] Setting yakd=true in profile "addons-052340"
	I1217 20:12:24.116144  489418 addons.go:239] Setting addon yakd=true in "addons-052340"
	I1217 20:12:24.116168  489418 host.go:66] Checking if "addons-052340" exists ...
	I1217 20:12:24.116682  489418 addons.go:70] Setting inspektor-gadget=true in profile "addons-052340"
	I1217 20:12:24.116695  489418 addons.go:239] Setting addon inspektor-gadget=true in "addons-052340"
	I1217 20:12:24.116715  489418 host.go:66] Checking if "addons-052340" exists ...
	I1217 20:12:24.117113  489418 cli_runner.go:164] Run: docker container inspect addons-052340 --format={{.State.Status}}
	I1217 20:12:24.117545  489418 cli_runner.go:164] Run: docker container inspect addons-052340 --format={{.State.Status}}
	I1217 20:12:24.117674  489418 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-052340"
	I1217 20:12:24.117687  489418 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-052340"
	I1217 20:12:24.117709  489418 host.go:66] Checking if "addons-052340" exists ...
	I1217 20:12:24.118160  489418 cli_runner.go:164] Run: docker container inspect addons-052340 --format={{.State.Status}}
	I1217 20:12:24.119608  489418 addons.go:70] Setting metrics-server=true in profile "addons-052340"
	I1217 20:12:24.119678  489418 addons.go:239] Setting addon metrics-server=true in "addons-052340"
	I1217 20:12:24.119766  489418 host.go:66] Checking if "addons-052340" exists ...
	I1217 20:12:24.120278  489418 cli_runner.go:164] Run: docker container inspect addons-052340 --format={{.State.Status}}
	I1217 20:12:24.122226  489418 addons.go:70] Setting cloud-spanner=true in profile "addons-052340"
	I1217 20:12:24.122262  489418 addons.go:239] Setting addon cloud-spanner=true in "addons-052340"
	I1217 20:12:24.122302  489418 host.go:66] Checking if "addons-052340" exists ...
	I1217 20:12:24.122781  489418 cli_runner.go:164] Run: docker container inspect addons-052340 --format={{.State.Status}}
	I1217 20:12:24.131326  489418 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-052340"
	I1217 20:12:24.131404  489418 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-052340"
	I1217 20:12:24.131435  489418 host.go:66] Checking if "addons-052340" exists ...
	I1217 20:12:24.131753  489418 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-052340"
	I1217 20:12:24.131775  489418 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-052340"
	I1217 20:12:24.131802  489418 host.go:66] Checking if "addons-052340" exists ...
	I1217 20:12:24.131956  489418 cli_runner.go:164] Run: docker container inspect addons-052340 --format={{.State.Status}}
	I1217 20:12:24.132218  489418 cli_runner.go:164] Run: docker container inspect addons-052340 --format={{.State.Status}}
	I1217 20:12:24.133027  489418 addons.go:70] Setting registry=true in profile "addons-052340"
	I1217 20:12:24.133052  489418 addons.go:239] Setting addon registry=true in "addons-052340"
	I1217 20:12:24.133089  489418 host.go:66] Checking if "addons-052340" exists ...
	I1217 20:12:24.133550  489418 cli_runner.go:164] Run: docker container inspect addons-052340 --format={{.State.Status}}
	I1217 20:12:24.147521  489418 addons.go:70] Setting default-storageclass=true in profile "addons-052340"
	I1217 20:12:24.147554  489418 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-052340"
	I1217 20:12:24.147944  489418 cli_runner.go:164] Run: docker container inspect addons-052340 --format={{.State.Status}}
	I1217 20:12:24.153537  489418 addons.go:70] Setting registry-creds=true in profile "addons-052340"
	I1217 20:12:24.153574  489418 addons.go:239] Setting addon registry-creds=true in "addons-052340"
	I1217 20:12:24.153613  489418 host.go:66] Checking if "addons-052340" exists ...
	I1217 20:12:24.154154  489418 cli_runner.go:164] Run: docker container inspect addons-052340 --format={{.State.Status}}
	I1217 20:12:24.175560  489418 addons.go:70] Setting storage-provisioner=true in profile "addons-052340"
	I1217 20:12:24.175607  489418 addons.go:239] Setting addon storage-provisioner=true in "addons-052340"
	I1217 20:12:24.175651  489418 host.go:66] Checking if "addons-052340" exists ...
	I1217 20:12:24.176301  489418 cli_runner.go:164] Run: docker container inspect addons-052340 --format={{.State.Status}}
	I1217 20:12:24.176916  489418 addons.go:70] Setting gcp-auth=true in profile "addons-052340"
	I1217 20:12:24.176945  489418 mustload.go:66] Loading cluster: addons-052340
	I1217 20:12:24.177127  489418 config.go:182] Loaded profile config "addons-052340": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:12:24.177393  489418 cli_runner.go:164] Run: docker container inspect addons-052340 --format={{.State.Status}}
	I1217 20:12:24.207840  489418 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-052340"
	I1217 20:12:24.207883  489418 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-052340"
	I1217 20:12:24.208259  489418 cli_runner.go:164] Run: docker container inspect addons-052340 --format={{.State.Status}}
	I1217 20:12:24.218462  489418 addons.go:70] Setting ingress=true in profile "addons-052340"
	I1217 20:12:24.218497  489418 addons.go:239] Setting addon ingress=true in "addons-052340"
	I1217 20:12:24.218550  489418 host.go:66] Checking if "addons-052340" exists ...
	I1217 20:12:24.219052  489418 cli_runner.go:164] Run: docker container inspect addons-052340 --format={{.State.Status}}
	I1217 20:12:24.237520  489418 addons.go:70] Setting volcano=true in profile "addons-052340"
	I1217 20:12:24.237555  489418 addons.go:239] Setting addon volcano=true in "addons-052340"
	I1217 20:12:24.237597  489418 host.go:66] Checking if "addons-052340" exists ...
	I1217 20:12:24.238103  489418 cli_runner.go:164] Run: docker container inspect addons-052340 --format={{.State.Status}}
	I1217 20:12:24.242039  489418 out.go:179] * Verifying Kubernetes components...
	I1217 20:12:24.242278  489418 addons.go:70] Setting ingress-dns=true in profile "addons-052340"
	I1217 20:12:24.242315  489418 addons.go:239] Setting addon ingress-dns=true in "addons-052340"
	I1217 20:12:24.242365  489418 host.go:66] Checking if "addons-052340" exists ...
	I1217 20:12:24.242939  489418 cli_runner.go:164] Run: docker container inspect addons-052340 --format={{.State.Status}}
	I1217 20:12:24.269486  489418 addons.go:70] Setting volumesnapshots=true in profile "addons-052340"
	I1217 20:12:24.269527  489418 addons.go:239] Setting addon volumesnapshots=true in "addons-052340"
	I1217 20:12:24.269563  489418 host.go:66] Checking if "addons-052340" exists ...
	I1217 20:12:24.270154  489418 cli_runner.go:164] Run: docker container inspect addons-052340 --format={{.State.Status}}
	I1217 20:12:24.377768  489418 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.1
	I1217 20:12:24.383749  489418 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:12:24.432459  489418 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1217 20:12:24.432545  489418 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1217 20:12:24.432654  489418 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-052340
	I1217 20:12:24.435688  489418 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1217 20:12:24.450789  489418 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1217 20:12:24.450855  489418 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1217 20:12:24.450953  489418 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-052340
	I1217 20:12:24.475146  489418 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1217 20:12:24.480746  489418 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.6
	I1217 20:12:24.480984  489418 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1217 20:12:24.481140  489418 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1217 20:12:24.481175  489418 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1217 20:12:24.481311  489418 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-052340
	I1217 20:12:24.487768  489418 addons.go:239] Setting addon default-storageclass=true in "addons-052340"
	I1217 20:12:24.487867  489418 host.go:66] Checking if "addons-052340" exists ...
	I1217 20:12:24.488458  489418 cli_runner.go:164] Run: docker container inspect addons-052340 --format={{.State.Status}}
	I1217 20:12:24.510912  489418 host.go:66] Checking if "addons-052340" exists ...
	I1217 20:12:24.520741  489418 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1217 20:12:24.524254  489418 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1217 20:12:24.524597  489418 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1217 20:12:24.524658  489418 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1217 20:12:24.524781  489418 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-052340
	I1217 20:12:24.529229  489418 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1217 20:12:24.529360  489418 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1217 20:12:24.529472  489418 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-052340
	W1217 20:12:24.571281  489418 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1217 20:12:24.579858  489418 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1217 20:12:24.579880  489418 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1217 20:12:24.579952  489418 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-052340
	I1217 20:12:24.580592  489418 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1217 20:12:24.606691  489418 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1217 20:12:24.606774  489418 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-052340
	I1217 20:12:24.580601  489418 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1217 20:12:24.580605  489418 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1217 20:12:24.588486  489418 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-052340"
	I1217 20:12:24.614009  489418 host.go:66] Checking if "addons-052340" exists ...
	I1217 20:12:24.614533  489418 cli_runner.go:164] Run: docker container inspect addons-052340 --format={{.State.Status}}
	I1217 20:12:24.626309  489418 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1217 20:12:24.631658  489418 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1217 20:12:24.631695  489418 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1217 20:12:24.631796  489418 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-052340
	I1217 20:12:24.637183  489418 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1217 20:12:24.640212  489418 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1217 20:12:24.644412  489418 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1217 20:12:24.652744  489418 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1217 20:12:24.654427  489418 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/addons-052340/id_rsa Username:docker}
	I1217 20:12:24.659733  489418 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 20:12:24.659796  489418 out.go:179]   - Using image docker.io/registry:3.0.0
	I1217 20:12:24.680752  489418 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1217 20:12:24.681030  489418 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1217 20:12:24.688041  489418 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1217 20:12:24.688190  489418 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-052340
	I1217 20:12:24.688497  489418 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:12:24.688510  489418 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 20:12:24.688563  489418 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-052340
	I1217 20:12:24.701094  489418 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1217 20:12:24.701127  489418 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1217 20:12:24.701192  489418 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-052340
	I1217 20:12:24.681039  489418 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1217 20:12:24.709803  489418 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1217 20:12:24.711667  489418 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/addons-052340/id_rsa Username:docker}
	I1217 20:12:24.715452  489418 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1217 20:12:24.716549  489418 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/addons-052340/id_rsa Username:docker}
	I1217 20:12:24.717389  489418 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/addons-052340/id_rsa Username:docker}
	I1217 20:12:24.718423  489418 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1217 20:12:24.718450  489418 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1217 20:12:24.718526  489418 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-052340
	I1217 20:12:24.726891  489418 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1217 20:12:24.733590  489418 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1217 20:12:24.737101  489418 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1217 20:12:24.743849  489418 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1217 20:12:24.743880  489418 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1217 20:12:24.743977  489418 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-052340
	I1217 20:12:24.760660  489418 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 20:12:24.760686  489418 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 20:12:24.760756  489418 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-052340
	I1217 20:12:24.792673  489418 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/addons-052340/id_rsa Username:docker}
	I1217 20:12:24.830120  489418 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/addons-052340/id_rsa Username:docker}
	I1217 20:12:24.837789  489418 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/addons-052340/id_rsa Username:docker}
	I1217 20:12:24.857980  489418 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1217 20:12:24.861246  489418 out.go:179]   - Using image docker.io/busybox:stable
	I1217 20:12:24.864927  489418 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1217 20:12:24.864952  489418 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1217 20:12:24.865034  489418 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-052340
	I1217 20:12:24.869142  489418 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/addons-052340/id_rsa Username:docker}
	I1217 20:12:24.901243  489418 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/addons-052340/id_rsa Username:docker}
	I1217 20:12:24.914832  489418 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/addons-052340/id_rsa Username:docker}
	I1217 20:12:24.923573  489418 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/addons-052340/id_rsa Username:docker}
	I1217 20:12:24.941371  489418 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/addons-052340/id_rsa Username:docker}
	W1217 20:12:24.943745  489418 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1217 20:12:24.943795  489418 retry.go:31] will retry after 207.94738ms: ssh: handshake failed: EOF
	I1217 20:12:24.963123  489418 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/addons-052340/id_rsa Username:docker}
	I1217 20:12:24.966477  489418 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/addons-052340/id_rsa Username:docker}
	I1217 20:12:24.972867  489418 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/addons-052340/id_rsa Username:docker}
	I1217 20:12:25.158190  489418 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.042427861s)
	I1217 20:12:25.158393  489418 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 20:12:25.158592  489418 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
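The sed pipeline above splices two edits into the CoreDNS Corefile before the kubectl replace: a hosts block resolving host.minikube.internal to the docker network gateway 192.168.49.1, and a log directive ahead of errors. The result, and a way to verify it:

    # Resulting Corefile fragment (as produced by the sed expressions):
    #     hosts {
    #        192.168.49.1 host.minikube.internal
    #        fallthrough
    #     }
    # Verify after the replace:
    kubectl -n kube-system get configmap coredns \
      -o jsonpath='{.data.Corefile}' | grep -A 3 'hosts {'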
	I1217 20:12:25.535488  489418 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 20:12:25.628816  489418 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1217 20:12:25.828941  489418 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1217 20:12:25.828972  489418 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1217 20:12:25.918108  489418 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1217 20:12:25.950284  489418 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1217 20:12:25.984086  489418 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:12:26.020764  489418 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1217 20:12:26.020791  489418 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1217 20:12:26.034360  489418 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1217 20:12:26.065160  489418 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1217 20:12:26.065839  489418 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1217 20:12:26.065860  489418 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1217 20:12:26.069499  489418 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1217 20:12:26.069518  489418 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1217 20:12:26.109792  489418 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1217 20:12:26.109813  489418 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1217 20:12:26.123513  489418 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1217 20:12:26.225000  489418 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1217 20:12:26.225071  489418 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1217 20:12:26.230171  489418 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1217 20:12:26.230255  489418 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1217 20:12:26.236936  489418 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1217 20:12:26.320631  489418 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1217 20:12:26.378425  489418 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1217 20:12:26.378505  489418 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1217 20:12:26.383158  489418 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1217 20:12:26.383178  489418 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1217 20:12:26.433406  489418 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1217 20:12:26.433434  489418 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1217 20:12:26.487684  489418 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1217 20:12:26.487706  489418 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1217 20:12:26.535162  489418 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1217 20:12:26.632456  489418 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1217 20:12:26.632532  489418 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1217 20:12:26.698955  489418 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1217 20:12:26.699036  489418 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1217 20:12:26.758038  489418 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1217 20:12:26.758122  489418 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1217 20:12:26.786227  489418 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1217 20:12:26.786313  489418 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1217 20:12:26.959034  489418 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1217 20:12:26.959105  489418 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1217 20:12:27.024763  489418 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1217 20:12:27.146438  489418 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1217 20:12:27.179071  489418 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1217 20:12:27.179169  489418 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1217 20:12:27.407739  489418 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1217 20:12:27.407769  489418 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1217 20:12:27.475978  489418 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1217 20:12:27.476015  489418 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1217 20:12:27.769271  489418 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1217 20:12:27.769356  489418 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1217 20:12:27.809971  489418 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.651490952s)
	I1217 20:12:27.810054  489418 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.274533471s)
	I1217 20:12:27.810291  489418 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.651660562s)
	I1217 20:12:27.810313  489418 start.go:1013] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
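The bash pipeline completed above rewrites the coredns ConfigMap in place: sed splices a hosts{} stanza in front of the forward plugin so that host.minikube.internal resolves to the gateway IP 192.168.49.1 from inside the cluster. For readers who prefer the API route over sed, here is a minimal client-go sketch of the same edit (a hypothetical helper, not minikube's actual code; the kubeconfig path is assumed):

    // inject_hostrecord.go - hypothetical sketch: add a hosts{} stanza to the
    // CoreDNS Corefile so host.minikube.internal resolves inside the cluster.
    package main

    import (
        "context"
        "strings"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    const hostsStanza = `        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }
    `

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(context.TODO(), "coredns", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        // Insert the hosts block before the "forward ." line, mirroring what
        // the sed pipeline in the log does.
        corefile := cm.Data["Corefile"]
        if !strings.Contains(corefile, "host.minikube.internal") {
            cm.Data["Corefile"] = strings.Replace(corefile,
                "        forward .", hostsStanza+"        forward .", 1)
            if _, err := cs.CoreV1().ConfigMaps("kube-system").Update(context.TODO(), cm, metav1.UpdateOptions{}); err != nil {
                panic(err)
            }
        }
    }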
	I1217 20:12:27.811020  489418 node_ready.go:35] waiting up to 6m0s for node "addons-052340" to be "Ready" ...
	I1217 20:12:27.914010  489418 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1217 20:12:28.154472  489418 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1217 20:12:28.154549  489418 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1217 20:12:28.286028  489418 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1217 20:12:28.286104  489418 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1217 20:12:28.316547  489418 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-052340" context rescaled to 1 replicas
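The "rescaled to 1 replicas" line above trims coredns from its default replica count, since one resolver is enough on a single-node cluster. A minimal sketch of that step through the Deployment scale subresource, assuming client-go (a hypothetical function, not the kapi.go implementation):

    // rescale_coredns.go - hypothetical sketch of the "rescaled to 1 replicas"
    // step: shrink the coredns Deployment via its scale subresource.
    package sketch

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func rescaleCoreDNS(ctx context.Context, cs *kubernetes.Clientset) error {
        scale, err := cs.AppsV1().Deployments("kube-system").
            GetScale(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            return err
        }
        scale.Spec.Replicas = 1 // one replica suffices on a single-node cluster
        _, err = cs.AppsV1().Deployments("kube-system").
            UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
        return err
    }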
	I1217 20:12:28.394547  489418 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.765693593s)
	I1217 20:12:28.539812  489418 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1217 20:12:28.539912  489418 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1217 20:12:28.734483  489418 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1217 20:12:28.734565  489418 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1217 20:12:28.890316  489418 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1217 20:12:29.825641  489418 node_ready.go:57] node "addons-052340" has "Ready":"False" status (will retry)
	I1217 20:12:31.832551  489418 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.914404802s)
	I1217 20:12:31.832639  489418 addons.go:495] Verifying addon ingress=true in "addons-052340"
	I1217 20:12:31.832673  489418 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.882363583s)
	I1217 20:12:31.832833  489418 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.848722288s)
	I1217 20:12:31.832991  489418 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (5.798604599s)
	I1217 20:12:31.833063  489418 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (5.767881451s)
	I1217 20:12:31.833226  489418 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (5.709536985s)
	I1217 20:12:31.833299  489418 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.596266496s)
	I1217 20:12:31.833385  489418 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.512676102s)
	I1217 20:12:31.833462  489418 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.298231003s)
	I1217 20:12:31.833481  489418 addons.go:495] Verifying addon registry=true in "addons-052340"
	I1217 20:12:31.833955  489418 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.809112804s)
	I1217 20:12:31.833974  489418 addons.go:495] Verifying addon metrics-server=true in "addons-052340"
	I1217 20:12:31.834012  489418 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.687487717s)
	I1217 20:12:31.834178  489418 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.920088255s)
	W1217 20:12:31.834379  489418 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1217 20:12:31.834400  489418 retry.go:31] will retry after 296.854856ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
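The failure above is a CRD establishment race, not a broken manifest: the same kubectl invocation both creates the snapshot.storage.k8s.io CRDs and submits a VolumeSnapshotClass, and the REST mapping for the new kind is not yet available, hence "ensure CRDs are installed first". minikube's answer is simply to retry (with --force on the next attempt, as the 20:12:32.131833 line below shows). A minimal sketch of that retry shape, with hypothetical names:

    // retry_apply.go - hypothetical sketch of the retry in the log: re-run an
    // apply until the CRD-backed kinds are established, with capped attempts.
    package sketch

    import (
        "fmt"
        "time"
    )

    // retryApply re-invokes apply() until it succeeds or attempts run out,
    // roughly mirroring the "will retry after 296.854856ms" behaviour above.
    func retryApply(apply func() error, attempts int, delay time.Duration) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = apply(); err == nil {
                return nil
            }
            time.Sleep(delay)
            delay *= 2 // back off; a fresh CRD is usually established within seconds
        }
        return fmt.Errorf("apply did not succeed after %d attempts: %w", attempts, err)
    }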
	I1217 20:12:31.836380  489418 out.go:179] * Verifying registry addon...
	I1217 20:12:31.838467  489418 out.go:179] * Verifying ingress addon...
	I1217 20:12:31.840356  489418 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-052340 service yakd-dashboard -n yakd-dashboard
	
	I1217 20:12:31.842911  489418 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1217 20:12:31.842985  489418 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1217 20:12:31.849910  489418 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1217 20:12:31.849940  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:12:31.850347  489418 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1217 20:12:31.850369  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
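Each of the "Verifying ..." goroutines above reduces to the same loop: list the pods behind a label selector and poll until all of them leave Pending. A minimal client-go sketch, assuming wait.PollUntilContextTimeout from k8s.io/apimachinery (a hypothetical helper; the selectors come straight from the log, e.g. kubernetes.io/minikube-addons=registry):

    // wait_pods.go - hypothetical sketch of the kapi.go wait loops: poll pods
    // matching a label selector until every one of them is Running.
    package sketch

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // e.g. waitForLabeledPods(ctx, cs, "kube-system", "kubernetes.io/minikube-addons=registry")
    func waitForLabeledPods(ctx context.Context, cs *kubernetes.Clientset, ns, selector string) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
                if err != nil || len(pods.Items) == 0 {
                    return false, nil // not found yet; keep polling
                }
                for _, p := range pods.Items {
                    if p.Status.Phase != corev1.PodRunning {
                        return false, nil // still Pending, like the lines above
                    }
                }
                return true, nil
            })
    }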
	I1217 20:12:32.131833  489418 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1217 20:12:32.140524  489418 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.250102119s)
	I1217 20:12:32.140570  489418 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-052340"
	I1217 20:12:32.143414  489418 out.go:179] * Verifying csi-hostpath-driver addon...
	I1217 20:12:32.146863  489418 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1217 20:12:32.173288  489418 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1217 20:12:32.173312  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:12:32.186326  489418 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1217 20:12:32.186423  489418 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-052340
	I1217 20:12:32.213786  489418 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/addons-052340/id_rsa Username:docker}
	W1217 20:12:32.314209  489418 node_ready.go:57] node "addons-052340" has "Ready":"False" status (will retry)
	I1217 20:12:32.349144  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:12:32.349362  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:12:32.357941  489418 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1217 20:12:32.371421  489418 addons.go:239] Setting addon gcp-auth=true in "addons-052340"
	I1217 20:12:32.371482  489418 host.go:66] Checking if "addons-052340" exists ...
	I1217 20:12:32.372016  489418 cli_runner.go:164] Run: docker container inspect addons-052340 --format={{.State.Status}}
	I1217 20:12:32.392763  489418 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1217 20:12:32.392845  489418 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-052340
	I1217 20:12:32.417306  489418 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/addons-052340/id_rsa Username:docker}
	I1217 20:12:32.650598  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:12:32.847050  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:12:32.847643  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:12:33.151175  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:12:33.346152  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:12:33.346329  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:12:33.650291  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:12:33.846022  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:12:33.846261  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:12:34.150766  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1217 20:12:34.314684  489418 node_ready.go:57] node "addons-052340" has "Ready":"False" status (will retry)
	I1217 20:12:34.347269  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:12:34.347306  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:12:34.651208  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:12:34.850841  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:12:34.851009  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:12:34.883964  489418 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.752089686s)
	I1217 20:12:34.884043  489418 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.491240083s)
	I1217 20:12:34.887382  489418 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1217 20:12:34.890482  489418 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1217 20:12:34.893357  489418 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1217 20:12:34.893385  489418 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1217 20:12:34.907921  489418 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1217 20:12:34.907945  489418 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1217 20:12:34.922197  489418 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1217 20:12:34.922253  489418 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1217 20:12:34.936840  489418 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1217 20:12:35.150996  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:12:35.350048  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:12:35.350711  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:12:35.432285  489418 addons.go:495] Verifying addon gcp-auth=true in "addons-052340"
	I1217 20:12:35.435809  489418 out.go:179] * Verifying gcp-auth addon...
	I1217 20:12:35.439489  489418 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1217 20:12:35.447621  489418 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1217 20:12:35.447696  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:12:35.650352  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:12:35.847322  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:12:35.847388  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:12:35.943389  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:12:36.151079  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:12:36.346613  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:12:36.346771  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:12:36.443427  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:12:36.650382  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1217 20:12:36.814586  489418 node_ready.go:57] node "addons-052340" has "Ready":"False" status (will retry)
	I1217 20:12:36.847104  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:12:36.847350  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:12:36.943506  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:12:37.150798  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:12:37.347519  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:12:37.347723  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:12:37.442873  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:12:37.649902  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:12:37.846226  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:12:37.846726  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:12:37.942745  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:12:38.151722  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:12:38.346766  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:12:38.347231  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:12:38.443328  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:12:38.678283  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:12:38.814852  489418 node_ready.go:49] node "addons-052340" is "Ready"
	I1217 20:12:38.814893  489418 node_ready.go:38] duration metric: took 11.003668139s for node "addons-052340" to be "Ready" ...
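node_ready.go's 11-second wait above turns on a single field: the node's Ready condition. A minimal sketch of the check, assuming client-go (a hypothetical helper):

    // node_ready.go sketch (hypothetical): report whether a node's Ready
    // condition is currently True.
    package sketch

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func nodeIsReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
        node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }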
	I1217 20:12:38.814908  489418 api_server.go:52] waiting for apiserver process to appear ...
	I1217 20:12:38.814976  489418 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:12:38.838679  489418 api_server.go:72] duration metric: took 14.723132287s to wait for apiserver process to appear ...
	I1217 20:12:38.838756  489418 api_server.go:88] waiting for apiserver healthz status ...
	I1217 20:12:38.838790  489418 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1217 20:12:38.857182  489418 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1217 20:12:38.889377  489418 api_server.go:141] control plane version: v1.34.3
	I1217 20:12:38.889461  489418 api_server.go:131] duration metric: took 50.683327ms to wait for apiserver health ...
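The healthz probe above is a plain HTTPS GET against the apiserver that succeeds once the endpoint answers 200 with body "ok". A self-contained sketch (hypothetical; certificate verification is skipped here purely for brevity, whereas the real check trusts the cluster CA):

    // healthz.go - hypothetical sketch of the healthz probe in the log.
    package sketch

    import (
        "crypto/tls"
        "io"
        "net/http"
        "strings"
    )

    func apiserverHealthy(url string) bool { // e.g. https://192.168.49.2:8443/healthz
        client := &http.Client{Transport: &http.Transport{
            // illustration-only shortcut; the real check verifies against the
            // cluster CA instead of skipping verification
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        }}
        resp, err := client.Get(url)
        if err != nil {
            return false
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        return resp.StatusCode == http.StatusOK && strings.TrimSpace(string(body)) == "ok"
    }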
	I1217 20:12:38.889486  489418 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 20:12:39.026419  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:12:39.026899  489418 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1217 20:12:39.026961  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:12:39.029015  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:12:39.030006  489418 system_pods.go:59] 19 kube-system pods found
	I1217 20:12:39.030080  489418 system_pods.go:61] "coredns-66bc5c9577-gnsjt" [6b5581fa-9c20-4809-acec-fdb941b96e8b] Pending
	I1217 20:12:39.030106  489418 system_pods.go:61] "csi-hostpath-attacher-0" [cffdfcc3-5be6-49bd-bbf0-f55a8fd74835] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1217 20:12:39.030147  489418 system_pods.go:61] "csi-hostpath-resizer-0" [0f0ab7ea-fb94-4093-ace4-699ac63bc501] Pending
	I1217 20:12:39.030173  489418 system_pods.go:61] "csi-hostpathplugin-r5tvz" [5fad484d-ec05-403c-98fe-d17423ad7823] Pending
	I1217 20:12:39.030193  489418 system_pods.go:61] "etcd-addons-052340" [811b19da-296f-4111-9c99-99e6e1747656] Running
	I1217 20:12:39.030213  489418 system_pods.go:61] "kindnet-sk69j" [0fd139f6-2369-4190-8538-13b4246cd1be] Running
	I1217 20:12:39.030233  489418 system_pods.go:61] "kube-apiserver-addons-052340" [84f7271a-33ae-4ce7-9678-e689706c3875] Running
	I1217 20:12:39.030262  489418 system_pods.go:61] "kube-controller-manager-addons-052340" [8fce3e2e-9d80-4002-962b-849b820175b1] Running
	I1217 20:12:39.030287  489418 system_pods.go:61] "kube-ingress-dns-minikube" [dd873af3-c7ba-4c3e-966a-558145fdd163] Pending
	I1217 20:12:39.030307  489418 system_pods.go:61] "kube-proxy-k6bpd" [caf1f177-b1c9-47ca-a5bf-6dca0b3cb333] Running
	I1217 20:12:39.030328  489418 system_pods.go:61] "kube-scheduler-addons-052340" [1a342705-94c6-49ff-9b2c-03f3bd0de227] Running
	I1217 20:12:39.030350  489418 system_pods.go:61] "metrics-server-85b7d694d7-5g267" [5948d9f5-4c59-4b54-9f3e-7fe6fe4859c0] Pending
	I1217 20:12:39.030385  489418 system_pods.go:61] "nvidia-device-plugin-daemonset-b7cpw" [f3e2f332-3129-49a8-8cb2-5dedaf7c64e2] Pending
	I1217 20:12:39.030403  489418 system_pods.go:61] "registry-6b586f9694-h2xmf" [2534706e-f0bb-4990-85ac-495c0ace51cc] Pending
	I1217 20:12:39.030423  489418 system_pods.go:61] "registry-creds-764b6fb674-4s27d" [dc923fe5-3d2f-4a8f-b089-bf8bf7d8040a] Pending
	I1217 20:12:39.030446  489418 system_pods.go:61] "registry-proxy-5q5m2" [ae7a6b86-2960-405a-9d77-ff957fe9411a] Pending
	I1217 20:12:39.030477  489418 system_pods.go:61] "snapshot-controller-7d9fbc56b8-4pf8h" [cfb30b47-2503-495d-b90c-9ae6471c5748] Pending
	I1217 20:12:39.030501  489418 system_pods.go:61] "snapshot-controller-7d9fbc56b8-7528t" [430605c6-5bc2-4f95-8c7c-d69c2da0556f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 20:12:39.030527  489418 system_pods.go:61] "storage-provisioner" [c2c69dbe-6663-429e-8e32-55d14709cd6e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 20:12:39.030563  489418 system_pods.go:74] duration metric: took 141.057039ms to wait for pod list to return data ...
	I1217 20:12:39.030591  489418 default_sa.go:34] waiting for default service account to be created ...
	I1217 20:12:39.096187  489418 default_sa.go:45] found service account: "default"
	I1217 20:12:39.096258  489418 default_sa.go:55] duration metric: took 65.645572ms for default service account to be created ...
	I1217 20:12:39.096305  489418 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 20:12:39.125460  489418 system_pods.go:86] 19 kube-system pods found
	I1217 20:12:39.131890  489418 system_pods.go:89] "coredns-66bc5c9577-gnsjt" [6b5581fa-9c20-4809-acec-fdb941b96e8b] Pending
	I1217 20:12:39.132339  489418 system_pods.go:89] "csi-hostpath-attacher-0" [cffdfcc3-5be6-49bd-bbf0-f55a8fd74835] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1217 20:12:39.132354  489418 system_pods.go:89] "csi-hostpath-resizer-0" [0f0ab7ea-fb94-4093-ace4-699ac63bc501] Pending
	I1217 20:12:39.132362  489418 system_pods.go:89] "csi-hostpathplugin-r5tvz" [5fad484d-ec05-403c-98fe-d17423ad7823] Pending
	I1217 20:12:39.132366  489418 system_pods.go:89] "etcd-addons-052340" [811b19da-296f-4111-9c99-99e6e1747656] Running
	I1217 20:12:39.132371  489418 system_pods.go:89] "kindnet-sk69j" [0fd139f6-2369-4190-8538-13b4246cd1be] Running
	I1217 20:12:39.132376  489418 system_pods.go:89] "kube-apiserver-addons-052340" [84f7271a-33ae-4ce7-9678-e689706c3875] Running
	I1217 20:12:39.132382  489418 system_pods.go:89] "kube-controller-manager-addons-052340" [8fce3e2e-9d80-4002-962b-849b820175b1] Running
	I1217 20:12:39.132387  489418 system_pods.go:89] "kube-ingress-dns-minikube" [dd873af3-c7ba-4c3e-966a-558145fdd163] Pending
	I1217 20:12:39.132391  489418 system_pods.go:89] "kube-proxy-k6bpd" [caf1f177-b1c9-47ca-a5bf-6dca0b3cb333] Running
	I1217 20:12:39.132395  489418 system_pods.go:89] "kube-scheduler-addons-052340" [1a342705-94c6-49ff-9b2c-03f3bd0de227] Running
	I1217 20:12:39.132400  489418 system_pods.go:89] "metrics-server-85b7d694d7-5g267" [5948d9f5-4c59-4b54-9f3e-7fe6fe4859c0] Pending
	I1217 20:12:39.132404  489418 system_pods.go:89] "nvidia-device-plugin-daemonset-b7cpw" [f3e2f332-3129-49a8-8cb2-5dedaf7c64e2] Pending
	I1217 20:12:39.132408  489418 system_pods.go:89] "registry-6b586f9694-h2xmf" [2534706e-f0bb-4990-85ac-495c0ace51cc] Pending
	I1217 20:12:39.132412  489418 system_pods.go:89] "registry-creds-764b6fb674-4s27d" [dc923fe5-3d2f-4a8f-b089-bf8bf7d8040a] Pending
	I1217 20:12:39.132416  489418 system_pods.go:89] "registry-proxy-5q5m2" [ae7a6b86-2960-405a-9d77-ff957fe9411a] Pending
	I1217 20:12:39.132420  489418 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4pf8h" [cfb30b47-2503-495d-b90c-9ae6471c5748] Pending
	I1217 20:12:39.132427  489418 system_pods.go:89] "snapshot-controller-7d9fbc56b8-7528t" [430605c6-5bc2-4f95-8c7c-d69c2da0556f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 20:12:39.132433  489418 system_pods.go:89] "storage-provisioner" [c2c69dbe-6663-429e-8e32-55d14709cd6e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 20:12:39.132450  489418 retry.go:31] will retry after 275.720911ms: missing components: kube-dns
	I1217 20:12:39.190405  489418 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1217 20:12:39.190503  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:12:39.366234  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:12:39.369177  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:12:39.419193  489418 system_pods.go:86] 19 kube-system pods found
	I1217 20:12:39.419286  489418 system_pods.go:89] "coredns-66bc5c9577-gnsjt" [6b5581fa-9c20-4809-acec-fdb941b96e8b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 20:12:39.419313  489418 system_pods.go:89] "csi-hostpath-attacher-0" [cffdfcc3-5be6-49bd-bbf0-f55a8fd74835] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1217 20:12:39.419352  489418 system_pods.go:89] "csi-hostpath-resizer-0" [0f0ab7ea-fb94-4093-ace4-699ac63bc501] Pending
	I1217 20:12:39.419380  489418 system_pods.go:89] "csi-hostpathplugin-r5tvz" [5fad484d-ec05-403c-98fe-d17423ad7823] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1217 20:12:39.419401  489418 system_pods.go:89] "etcd-addons-052340" [811b19da-296f-4111-9c99-99e6e1747656] Running
	I1217 20:12:39.419423  489418 system_pods.go:89] "kindnet-sk69j" [0fd139f6-2369-4190-8538-13b4246cd1be] Running
	I1217 20:12:39.419455  489418 system_pods.go:89] "kube-apiserver-addons-052340" [84f7271a-33ae-4ce7-9678-e689706c3875] Running
	I1217 20:12:39.419476  489418 system_pods.go:89] "kube-controller-manager-addons-052340" [8fce3e2e-9d80-4002-962b-849b820175b1] Running
	I1217 20:12:39.419496  489418 system_pods.go:89] "kube-ingress-dns-minikube" [dd873af3-c7ba-4c3e-966a-558145fdd163] Pending
	I1217 20:12:39.419515  489418 system_pods.go:89] "kube-proxy-k6bpd" [caf1f177-b1c9-47ca-a5bf-6dca0b3cb333] Running
	I1217 20:12:39.419536  489418 system_pods.go:89] "kube-scheduler-addons-052340" [1a342705-94c6-49ff-9b2c-03f3bd0de227] Running
	I1217 20:12:39.419565  489418 system_pods.go:89] "metrics-server-85b7d694d7-5g267" [5948d9f5-4c59-4b54-9f3e-7fe6fe4859c0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1217 20:12:39.419710  489418 system_pods.go:89] "nvidia-device-plugin-daemonset-b7cpw" [f3e2f332-3129-49a8-8cb2-5dedaf7c64e2] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1217 20:12:39.419737  489418 system_pods.go:89] "registry-6b586f9694-h2xmf" [2534706e-f0bb-4990-85ac-495c0ace51cc] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1217 20:12:39.419759  489418 system_pods.go:89] "registry-creds-764b6fb674-4s27d" [dc923fe5-3d2f-4a8f-b089-bf8bf7d8040a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1217 20:12:39.419793  489418 system_pods.go:89] "registry-proxy-5q5m2" [ae7a6b86-2960-405a-9d77-ff957fe9411a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1217 20:12:39.419823  489418 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4pf8h" [cfb30b47-2503-495d-b90c-9ae6471c5748] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 20:12:39.419847  489418 system_pods.go:89] "snapshot-controller-7d9fbc56b8-7528t" [430605c6-5bc2-4f95-8c7c-d69c2da0556f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 20:12:39.419872  489418 system_pods.go:89] "storage-provisioner" [c2c69dbe-6663-429e-8e32-55d14709cd6e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 20:12:39.419917  489418 retry.go:31] will retry after 389.121722ms: missing components: kube-dns
	I1217 20:12:39.447139  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:12:39.651399  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:12:39.840754  489418 system_pods.go:86] 19 kube-system pods found
	I1217 20:12:39.840848  489418 system_pods.go:89] "coredns-66bc5c9577-gnsjt" [6b5581fa-9c20-4809-acec-fdb941b96e8b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 20:12:39.840877  489418 system_pods.go:89] "csi-hostpath-attacher-0" [cffdfcc3-5be6-49bd-bbf0-f55a8fd74835] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1217 20:12:39.840917  489418 system_pods.go:89] "csi-hostpath-resizer-0" [0f0ab7ea-fb94-4093-ace4-699ac63bc501] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1217 20:12:39.840948  489418 system_pods.go:89] "csi-hostpathplugin-r5tvz" [5fad484d-ec05-403c-98fe-d17423ad7823] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1217 20:12:39.840970  489418 system_pods.go:89] "etcd-addons-052340" [811b19da-296f-4111-9c99-99e6e1747656] Running
	I1217 20:12:39.840992  489418 system_pods.go:89] "kindnet-sk69j" [0fd139f6-2369-4190-8538-13b4246cd1be] Running
	I1217 20:12:39.841028  489418 system_pods.go:89] "kube-apiserver-addons-052340" [84f7271a-33ae-4ce7-9678-e689706c3875] Running
	I1217 20:12:39.841053  489418 system_pods.go:89] "kube-controller-manager-addons-052340" [8fce3e2e-9d80-4002-962b-849b820175b1] Running
	I1217 20:12:39.841077  489418 system_pods.go:89] "kube-ingress-dns-minikube" [dd873af3-c7ba-4c3e-966a-558145fdd163] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1217 20:12:39.841098  489418 system_pods.go:89] "kube-proxy-k6bpd" [caf1f177-b1c9-47ca-a5bf-6dca0b3cb333] Running
	I1217 20:12:39.841133  489418 system_pods.go:89] "kube-scheduler-addons-052340" [1a342705-94c6-49ff-9b2c-03f3bd0de227] Running
	I1217 20:12:39.841157  489418 system_pods.go:89] "metrics-server-85b7d694d7-5g267" [5948d9f5-4c59-4b54-9f3e-7fe6fe4859c0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1217 20:12:39.841176  489418 system_pods.go:89] "nvidia-device-plugin-daemonset-b7cpw" [f3e2f332-3129-49a8-8cb2-5dedaf7c64e2] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1217 20:12:39.841198  489418 system_pods.go:89] "registry-6b586f9694-h2xmf" [2534706e-f0bb-4990-85ac-495c0ace51cc] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1217 20:12:39.841243  489418 system_pods.go:89] "registry-creds-764b6fb674-4s27d" [dc923fe5-3d2f-4a8f-b089-bf8bf7d8040a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1217 20:12:39.841268  489418 system_pods.go:89] "registry-proxy-5q5m2" [ae7a6b86-2960-405a-9d77-ff957fe9411a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1217 20:12:39.841290  489418 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4pf8h" [cfb30b47-2503-495d-b90c-9ae6471c5748] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 20:12:39.841313  489418 system_pods.go:89] "snapshot-controller-7d9fbc56b8-7528t" [430605c6-5bc2-4f95-8c7c-d69c2da0556f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 20:12:39.841355  489418 system_pods.go:89] "storage-provisioner" [c2c69dbe-6663-429e-8e32-55d14709cd6e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 20:12:39.841392  489418 retry.go:31] will retry after 474.900694ms: missing components: kube-dns
	I1217 20:12:39.933174  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:12:39.933792  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:12:39.944811  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:12:40.150602  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:12:40.335965  489418 system_pods.go:86] 19 kube-system pods found
	I1217 20:12:40.336043  489418 system_pods.go:89] "coredns-66bc5c9577-gnsjt" [6b5581fa-9c20-4809-acec-fdb941b96e8b] Running
	I1217 20:12:40.336069  489418 system_pods.go:89] "csi-hostpath-attacher-0" [cffdfcc3-5be6-49bd-bbf0-f55a8fd74835] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1217 20:12:40.336089  489418 system_pods.go:89] "csi-hostpath-resizer-0" [0f0ab7ea-fb94-4093-ace4-699ac63bc501] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1217 20:12:40.336132  489418 system_pods.go:89] "csi-hostpathplugin-r5tvz" [5fad484d-ec05-403c-98fe-d17423ad7823] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1217 20:12:40.336161  489418 system_pods.go:89] "etcd-addons-052340" [811b19da-296f-4111-9c99-99e6e1747656] Running
	I1217 20:12:40.336181  489418 system_pods.go:89] "kindnet-sk69j" [0fd139f6-2369-4190-8538-13b4246cd1be] Running
	I1217 20:12:40.336200  489418 system_pods.go:89] "kube-apiserver-addons-052340" [84f7271a-33ae-4ce7-9678-e689706c3875] Running
	I1217 20:12:40.336219  489418 system_pods.go:89] "kube-controller-manager-addons-052340" [8fce3e2e-9d80-4002-962b-849b820175b1] Running
	I1217 20:12:40.336250  489418 system_pods.go:89] "kube-ingress-dns-minikube" [dd873af3-c7ba-4c3e-966a-558145fdd163] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1217 20:12:40.336274  489418 system_pods.go:89] "kube-proxy-k6bpd" [caf1f177-b1c9-47ca-a5bf-6dca0b3cb333] Running
	I1217 20:12:40.336295  489418 system_pods.go:89] "kube-scheduler-addons-052340" [1a342705-94c6-49ff-9b2c-03f3bd0de227] Running
	I1217 20:12:40.336317  489418 system_pods.go:89] "metrics-server-85b7d694d7-5g267" [5948d9f5-4c59-4b54-9f3e-7fe6fe4859c0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1217 20:12:40.336349  489418 system_pods.go:89] "nvidia-device-plugin-daemonset-b7cpw" [f3e2f332-3129-49a8-8cb2-5dedaf7c64e2] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1217 20:12:40.336380  489418 system_pods.go:89] "registry-6b586f9694-h2xmf" [2534706e-f0bb-4990-85ac-495c0ace51cc] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1217 20:12:40.336400  489418 system_pods.go:89] "registry-creds-764b6fb674-4s27d" [dc923fe5-3d2f-4a8f-b089-bf8bf7d8040a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1217 20:12:40.336422  489418 system_pods.go:89] "registry-proxy-5q5m2" [ae7a6b86-2960-405a-9d77-ff957fe9411a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1217 20:12:40.336454  489418 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4pf8h" [cfb30b47-2503-495d-b90c-9ae6471c5748] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 20:12:40.336481  489418 system_pods.go:89] "snapshot-controller-7d9fbc56b8-7528t" [430605c6-5bc2-4f95-8c7c-d69c2da0556f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 20:12:40.336501  489418 system_pods.go:89] "storage-provisioner" [c2c69dbe-6663-429e-8e32-55d14709cd6e] Running
	I1217 20:12:40.336527  489418 system_pods.go:126] duration metric: took 1.240188101s to wait for k8s-apps to be running ...
	I1217 20:12:40.336560  489418 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 20:12:40.336640  489418 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 20:12:40.364834  489418 system_svc.go:56] duration metric: took 28.265502ms WaitForService to wait for kubelet
	I1217 20:12:40.364909  489418 kubeadm.go:587] duration metric: took 16.249366709s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
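The kubelet wait above shells out to systemd: exit status 0 from systemctl is-active --quiet means the unit is active. A one-line sketch with os/exec, mirroring the command from the log (a hypothetical local wrapper; the real call runs over SSH via ssh_runner):

    // kubelet_active.go - hypothetical local sketch of the systemctl probe.
    package sketch

    import "os/exec"

    // kubeletActive mirrors the log's probe: a nil error from Run() means
    // the command exited 0, i.e. the unit is active.
    func kubeletActive() bool {
        return exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run() == nil
    }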
	I1217 20:12:40.364947  489418 node_conditions.go:102] verifying NodePressure condition ...
	I1217 20:12:40.368168  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:12:40.368510  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:12:40.377143  489418 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1217 20:12:40.377239  489418 node_conditions.go:123] node cpu capacity is 2
	I1217 20:12:40.377269  489418 node_conditions.go:105] duration metric: took 12.300157ms to run NodePressure ...
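The NodePressure verification reads capacity straight off the node object, which is where the 203034800Ki ephemeral-storage and 2-CPU figures above come from. A minimal client-go sketch of pulling those fields (a hypothetical helper):

    // node_capacity.go - hypothetical sketch: read the capacity figures the
    // NodePressure check logs (cpu, ephemeral-storage).
    package sketch

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func printNodeCapacity(ctx context.Context, cs *kubernetes.Clientset, name string) error {
        node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return err
        }
        cpu := node.Status.Capacity[corev1.ResourceCPU]
        eph := node.Status.Capacity[corev1.ResourceEphemeralStorage]
        fmt.Printf("cpu=%s ephemeral-storage=%s\n", cpu.String(), eph.String())
        return nil
    }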
	I1217 20:12:40.377308  489418 start.go:242] waiting for startup goroutines ...
	I1217 20:12:40.461000  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:12:40.650540  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:12:40.848172  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:12:40.849326  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:12:40.943393  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	[... the same four polls (csi-hostpath-driver, ingress-nginx, registry, gcp-auth) repeat every ~200-500ms, all still Pending, from 20:12:41 through 20:13:33 ...]
	I1217 20:13:33.347128  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:33.347361  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:13:33.443207  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:13:33.651348  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:33.848630  489418 kapi.go:107] duration metric: took 1m2.005717766s to wait for kubernetes.io/minikube-addons=registry ...
	I1217 20:13:33.848813  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	[... polling continues for the remaining pods (csi-hostpath-driver, ingress-nginx, gcp-auth), all still Pending, from 20:13:33 through 20:13:40 ...]
	I1217 20:13:40.151127  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:40.354359  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:40.444680  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:13:40.658239  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:40.847185  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:40.947749  489418 kapi.go:107] duration metric: took 1m5.508261299s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1217 20:13:40.951845  489418 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-052340 cluster.
	I1217 20:13:40.955070  489418 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1217 20:13:40.958375  489418 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1217 20:13:41.151700  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:41.346866  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:41.650650  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:41.847761  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:42.153538  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:42.349847  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:42.652236  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:42.849641  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:43.151993  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:43.346026  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:43.651003  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:43.846380  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:44.150640  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:44.346617  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:44.650313  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:44.846535  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:45.151909  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:45.346522  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:45.651228  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:45.847022  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:46.150515  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:46.347690  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:46.651437  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:46.847403  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:47.151567  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:47.347043  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:47.650971  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:47.846766  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:48.152379  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:48.347114  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:48.650622  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:48.847045  489418 kapi.go:107] duration metric: took 1m17.004054713s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1217 20:13:49.150499  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:49.654495  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:50.155612  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:50.651227  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:51.151864  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:51.655238  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:52.151195  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:52.650223  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:53.151035  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:53.650870  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:54.151189  489418 kapi.go:107] duration metric: took 1m22.004325474s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1217 20:13:54.154233  489418 out.go:179] * Enabled addons: default-storageclass, nvidia-device-plugin, registry-creds, amd-gpu-device-plugin, storage-provisioner, ingress-dns, inspektor-gadget, cloud-spanner, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1217 20:13:54.156998  489418 addons.go:530] duration metric: took 1m30.040934347s for enable addons: enabled=[default-storageclass nvidia-device-plugin registry-creds amd-gpu-device-plugin storage-provisioner ingress-dns inspektor-gadget cloud-spanner metrics-server yakd storage-provisioner-rancher volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I1217 20:13:54.157057  489418 start.go:247] waiting for cluster config update ...
	I1217 20:13:54.157083  489418 start.go:256] writing updated cluster config ...
	I1217 20:13:54.157410  489418 ssh_runner.go:195] Run: rm -f paused
	I1217 20:13:54.163774  489418 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 20:13:54.167399  489418 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-gnsjt" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:13:54.173160  489418 pod_ready.go:94] pod "coredns-66bc5c9577-gnsjt" is "Ready"
	I1217 20:13:54.173187  489418 pod_ready.go:86] duration metric: took 5.758866ms for pod "coredns-66bc5c9577-gnsjt" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:13:54.175526  489418 pod_ready.go:83] waiting for pod "etcd-addons-052340" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:13:54.180596  489418 pod_ready.go:94] pod "etcd-addons-052340" is "Ready"
	I1217 20:13:54.180624  489418 pod_ready.go:86] duration metric: took 4.977182ms for pod "etcd-addons-052340" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:13:54.182921  489418 pod_ready.go:83] waiting for pod "kube-apiserver-addons-052340" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:13:54.187562  489418 pod_ready.go:94] pod "kube-apiserver-addons-052340" is "Ready"
	I1217 20:13:54.187612  489418 pod_ready.go:86] duration metric: took 4.661382ms for pod "kube-apiserver-addons-052340" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:13:54.189979  489418 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-052340" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:13:54.567622  489418 pod_ready.go:94] pod "kube-controller-manager-addons-052340" is "Ready"
	I1217 20:13:54.567652  489418 pod_ready.go:86] duration metric: took 377.648528ms for pod "kube-controller-manager-addons-052340" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:13:54.768369  489418 pod_ready.go:83] waiting for pod "kube-proxy-k6bpd" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:13:55.167456  489418 pod_ready.go:94] pod "kube-proxy-k6bpd" is "Ready"
	I1217 20:13:55.167483  489418 pod_ready.go:86] duration metric: took 399.08797ms for pod "kube-proxy-k6bpd" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:13:55.367989  489418 pod_ready.go:83] waiting for pod "kube-scheduler-addons-052340" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:13:55.768538  489418 pod_ready.go:94] pod "kube-scheduler-addons-052340" is "Ready"
	I1217 20:13:55.768568  489418 pod_ready.go:86] duration metric: took 400.507771ms for pod "kube-scheduler-addons-052340" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:13:55.768583  489418 pod_ready.go:40] duration metric: took 1.604772229s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 20:13:55.824972  489418 start.go:625] kubectl: 1.33.2, cluster: 1.34.3 (minor skew: 1)
	I1217 20:13:55.828325  489418 out.go:179] * Done! kubectl is now configured to use "addons-052340" cluster and "default" namespace by default
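	
	A note on the gcp-auth lines above: the webhook mounts credentials into every new pod unless the pod carries the `gcp-auth-skip-secret` label. A minimal sketch of opting a workload out, assuming a hypothetical deployment named my-app (the label key is taken from the log above; the value "true" is the conventional choice):
	
	    kubectl patch deployment my-app --type merge \
	      -p '{"spec":{"template":{"metadata":{"labels":{"gcp-auth-skip-secret":"true"}}}}}'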
	
	
	==> CRI-O <==
	Dec 17 20:16:51 addons-052340 crio[826]: time="2025-12-17T20:16:51.458068544Z" level=info msg="Removed container b96f9df29ef7696bb5945d1859ed2b42645fb63f17923212a3cdfea9e2967c5a: kube-system/registry-creds-764b6fb674-4s27d/registry-creds" id=febb2d21-7791-4c69-8a7d-13094f680d67 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 20:17:03 addons-052340 crio[826]: time="2025-12-17T20:17:03.133631715Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-qq2dj/POD" id=9b73403f-0ca8-4bd0-8753-95427908303d name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 20:17:03 addons-052340 crio[826]: time="2025-12-17T20:17:03.133726239Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 20:17:03 addons-052340 crio[826]: time="2025-12-17T20:17:03.143616432Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-qq2dj Namespace:default ID:5b2fec9c09fdb8ec07c46748990a43fe531892e9103b8f54123f11919f6565c3 UID:1d0a357a-92a7-4fe9-9b80-300ddd734b34 NetNS:/var/run/netns/f7062955-cd06-4c4c-9dfb-8adc7902c4a2 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40020f9920}] Aliases:map[]}"
	Dec 17 20:17:03 addons-052340 crio[826]: time="2025-12-17T20:17:03.143677274Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-qq2dj to CNI network \"kindnet\" (type=ptp)"
	Dec 17 20:17:03 addons-052340 crio[826]: time="2025-12-17T20:17:03.160715723Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-qq2dj Namespace:default ID:5b2fec9c09fdb8ec07c46748990a43fe531892e9103b8f54123f11919f6565c3 UID:1d0a357a-92a7-4fe9-9b80-300ddd734b34 NetNS:/var/run/netns/f7062955-cd06-4c4c-9dfb-8adc7902c4a2 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40020f9920}] Aliases:map[]}"
	Dec 17 20:17:03 addons-052340 crio[826]: time="2025-12-17T20:17:03.160974153Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-qq2dj for CNI network kindnet (type=ptp)"
	Dec 17 20:17:03 addons-052340 crio[826]: time="2025-12-17T20:17:03.171421889Z" level=info msg="Ran pod sandbox 5b2fec9c09fdb8ec07c46748990a43fe531892e9103b8f54123f11919f6565c3 with infra container: default/hello-world-app-5d498dc89-qq2dj/POD" id=9b73403f-0ca8-4bd0-8753-95427908303d name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 20:17:03 addons-052340 crio[826]: time="2025-12-17T20:17:03.176936377Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=7b071143-343e-44f8-99a6-efbaacea6352 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:17:03 addons-052340 crio[826]: time="2025-12-17T20:17:03.177094179Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=7b071143-343e-44f8-99a6-efbaacea6352 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:17:03 addons-052340 crio[826]: time="2025-12-17T20:17:03.177135517Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:1.0 found" id=7b071143-343e-44f8-99a6-efbaacea6352 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:17:03 addons-052340 crio[826]: time="2025-12-17T20:17:03.180827722Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=052d4190-3691-4075-8d02-dc14a47a0b4d name=/runtime.v1.ImageService/PullImage
	Dec 17 20:17:03 addons-052340 crio[826]: time="2025-12-17T20:17:03.186494293Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Dec 17 20:17:03 addons-052340 crio[826]: time="2025-12-17T20:17:03.828527647Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b" id=052d4190-3691-4075-8d02-dc14a47a0b4d name=/runtime.v1.ImageService/PullImage
	Dec 17 20:17:03 addons-052340 crio[826]: time="2025-12-17T20:17:03.829281417Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=5723bac4-0da5-4a1d-899e-e4d47d8026bd name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:17:03 addons-052340 crio[826]: time="2025-12-17T20:17:03.83476347Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=9665b1c8-ae19-44dc-811d-62b1f002e79b name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:17:03 addons-052340 crio[826]: time="2025-12-17T20:17:03.848146408Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-qq2dj/hello-world-app" id=1a758c5b-f2e4-44ca-ae61-c0b942ce53c2 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 20:17:03 addons-052340 crio[826]: time="2025-12-17T20:17:03.848320374Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 20:17:03 addons-052340 crio[826]: time="2025-12-17T20:17:03.857741583Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 20:17:03 addons-052340 crio[826]: time="2025-12-17T20:17:03.857964477Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/07eefaf084299d2c498fd73eb77fd8cc21d2f5a31a06dc205aae83fbb7fe9b43/merged/etc/passwd: no such file or directory"
	Dec 17 20:17:03 addons-052340 crio[826]: time="2025-12-17T20:17:03.857989544Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/07eefaf084299d2c498fd73eb77fd8cc21d2f5a31a06dc205aae83fbb7fe9b43/merged/etc/group: no such file or directory"
	Dec 17 20:17:03 addons-052340 crio[826]: time="2025-12-17T20:17:03.858293481Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 20:17:03 addons-052340 crio[826]: time="2025-12-17T20:17:03.882344604Z" level=info msg="Created container 3168964644a7c739c7fc5fa609ff8266c8437c36fc75897afb25e316b1406039: default/hello-world-app-5d498dc89-qq2dj/hello-world-app" id=1a758c5b-f2e4-44ca-ae61-c0b942ce53c2 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 20:17:03 addons-052340 crio[826]: time="2025-12-17T20:17:03.886745868Z" level=info msg="Starting container: 3168964644a7c739c7fc5fa609ff8266c8437c36fc75897afb25e316b1406039" id=4bcb7586-fe4f-46ac-b819-28dc669256cb name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 20:17:03 addons-052340 crio[826]: time="2025-12-17T20:17:03.891519485Z" level=info msg="Started container" PID=7082 containerID=3168964644a7c739c7fc5fa609ff8266c8437c36fc75897afb25e316b1406039 description=default/hello-world-app-5d498dc89-qq2dj/hello-world-app id=4bcb7586-fe4f-46ac-b819-28dc669256cb name=/runtime.v1.RuntimeService/StartContainer sandboxID=5b2fec9c09fdb8ec07c46748990a43fe531892e9103b8f54123f11919f6565c3
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	3168964644a7c       docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b                                        1 second ago        Running             hello-world-app                          0                   5b2fec9c09fdb       hello-world-app-5d498dc89-qq2dj             default
	f180587eafc97       a2fd0654e5baeec8de2209bfade13a0034e942e708fd2bbfce69bb26a3c02e14                                                                             14 seconds ago      Exited              registry-creds                           1                   70d70fbb949b6       registry-creds-764b6fb674-4s27d             kube-system
	342b0dba3e57d       public.ecr.aws/nginx/nginx@sha256:2faa7e87b6fbce823070978247970cea2ad90b1936e84eeae1bd2680b03c168d                                           2 minutes ago       Running             nginx                                    0                   4dc532d92b38b       nginx                                       default
	427c6ab355b3b       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          3 minutes ago       Running             busybox                                  0                   ffb09119c6423       busybox                                     default
	a40f8c4c9d667       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          3 minutes ago       Running             csi-snapshotter                          0                   aea6feee2e609       csi-hostpathplugin-r5tvz                    kube-system
	5b40645a2f296       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          3 minutes ago       Running             csi-provisioner                          0                   aea6feee2e609       csi-hostpathplugin-r5tvz                    kube-system
	4a7c07a0b9754       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            3 minutes ago       Running             liveness-probe                           0                   aea6feee2e609       csi-hostpathplugin-r5tvz                    kube-system
	d3baa47458bc0       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           3 minutes ago       Running             hostpath                                 0                   aea6feee2e609       csi-hostpathplugin-r5tvz                    kube-system
	87621426512f4       registry.k8s.io/ingress-nginx/controller@sha256:75494e2145fbebf362d24e24e9285b7fbb7da8783ab272092e3126e24ee4776d                             3 minutes ago       Running             controller                               0                   be4e5da881a27       ingress-nginx-controller-85d4c799dd-c8vnl   ingress-nginx
	fcc9a7828f6b8       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                3 minutes ago       Running             node-driver-registrar                    0                   aea6feee2e609       csi-hostpathplugin-r5tvz                    kube-system
	8268763706c3e       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 3 minutes ago       Running             gcp-auth                                 0                   b097189bf7bab       gcp-auth-78565c9fb4-sc72c                   gcp-auth
	c449a66d2de59       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:fadc7bf59b69965b6707edb68022bed4f55a1f99b15f7acd272793e48f171496                            3 minutes ago       Running             gadget                                   0                   99974f7283fd0       gadget-sw4gn                                gadget
	765f61bbb3ba8       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              3 minutes ago       Running             registry-proxy                           0                   e469a47e52707       registry-proxy-5q5m2                        kube-system
	d33875f7a8b74       nvcr.io/nvidia/k8s-device-plugin@sha256:10b7b747520ba2314061b5b319d3b2766b9cec1fd9404109c607e85b30af6905                                     3 minutes ago       Running             nvidia-device-plugin-ctr                 0                   63bc0e22d348f       nvidia-device-plugin-daemonset-b7cpw        kube-system
	79a9e5f943348       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago       Running             volume-snapshot-controller               0                   58e6e4cb7fb24       snapshot-controller-7d9fbc56b8-4pf8h        kube-system
	0ed0b3ea99114       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:c9c1ef89e4bb9d6c9c6c0b5375c3253a0b951e5b731240be20cebe5593de142d                   3 minutes ago       Exited              patch                                    0                   3f2c8eac68227       ingress-nginx-admission-patch-h9l82         ingress-nginx
	e8ab91246e2c9       gcr.io/cloud-spanner-emulator/emulator@sha256:daeab9cb1978e02113045625e2633619f465f22aac7638101995f4cd03607170                               3 minutes ago       Running             cloud-spanner-emulator                   0                   89789ba91cbea       cloud-spanner-emulator-5bdddb765-dfn99      default
	91ba82c9d4363       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:c9c1ef89e4bb9d6c9c6c0b5375c3253a0b951e5b731240be20cebe5593de142d                   3 minutes ago       Exited              create                                   0                   7c9b1656770b5       ingress-nginx-admission-create-tqlpn        ingress-nginx
	8d7f5c62629eb       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              3 minutes ago       Running             csi-resizer                              0                   38bc69ec4fd7b       csi-hostpath-resizer-0                      kube-system
	af528db69b5d3       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   4 minutes ago       Running             csi-external-health-monitor-controller   0                   aea6feee2e609       csi-hostpathplugin-r5tvz                    kube-system
	028c23d163f91       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               4 minutes ago       Running             minikube-ingress-dns                     0                   ab8dfd6e45a7c       kube-ingress-dns-minikube                   kube-system
	85713e1610062       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           4 minutes ago       Running             registry                                 0                   f632bdc32082f       registry-6b586f9694-h2xmf                   kube-system
	cb82734cbf7f9       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        4 minutes ago       Running             metrics-server                           0                   9107e046fafae       metrics-server-85b7d694d7-5g267             kube-system
	f60e3e143ad43       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             4 minutes ago       Running             csi-attacher                             0                   d1ea2f9507cc2       csi-hostpath-attacher-0                     kube-system
	5f30c60c285e3       docker.io/marcnuri/yakd@sha256:0b7e831df7fe4ad1c8c56a736a8d66bd86e243f6777d3c512ead47199d8fbe1a                                              4 minutes ago       Running             yakd                                     0                   1c2c566814bbb       yakd-dashboard-6654c87f9b-pgp6s             yakd-dashboard
	a0efa1d77e190       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             4 minutes ago       Running             local-path-provisioner                   0                   ff8e53075302d       local-path-provisioner-648f6765c9-gtfnn     local-path-storage
	18d9f4acabfa7       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      4 minutes ago       Running             volume-snapshot-controller               0                   5b40c0b75bb78       snapshot-controller-7d9fbc56b8-7528t        kube-system
	fead2bfafa736       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             4 minutes ago       Running             coredns                                  0                   67cdd3f664299       coredns-66bc5c9577-gnsjt                    kube-system
	2b6e269a93d8b       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             4 minutes ago       Running             storage-provisioner                      0                   c4e7bbf51dc90       storage-provisioner                         kube-system
	7fd15b6b59471       docker.io/kindest/kindnetd@sha256:f1260f5691195cc9a693dc0b55178aa724d944efd62486a8320f0583272b1fa3                                           4 minutes ago       Running             kindnet-cni                              0                   9efa355305cae       kindnet-sk69j                               kube-system
	6311d7d7f6f04       4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162                                                                             4 minutes ago       Running             kube-proxy                               0                   21e8d3982f40d       kube-proxy-k6bpd                            kube-system
	6481ede2a21b9       2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6                                                                             4 minutes ago       Running             kube-scheduler                           0                   0aed8ad08d915       kube-scheduler-addons-052340                kube-system
	873499eab93d6       cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896                                                                             4 minutes ago       Running             kube-apiserver                           0                   0218faf6e1933       kube-apiserver-addons-052340                kube-system
	45a4f23a594c0       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                                                             4 minutes ago       Running             etcd                                     0                   e851b97e5df79       etcd-addons-052340                          kube-system
	dd0351330b604       7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22                                                                             4 minutes ago       Running             kube-controller-manager                  0                   89e40a49d3838       kube-controller-manager-addons-052340       kube-system
	
	
	==> coredns [fead2bfafa7367c08236e1dd9040e848a879a067c3b70a78578d71429854ebf6] <==
	[INFO] 10.244.0.16:38007 - 5255 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002104383s
	[INFO] 10.244.0.16:38007 - 23707 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000109752s
	[INFO] 10.244.0.16:38007 - 60262 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000192797s
	[INFO] 10.244.0.16:43152 - 49778 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000163045s
	[INFO] 10.244.0.16:43152 - 49565 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00007973s
	[INFO] 10.244.0.16:41345 - 31905 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000087837s
	[INFO] 10.244.0.16:41345 - 31700 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000067636s
	[INFO] 10.244.0.16:52993 - 3472 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000080517s
	[INFO] 10.244.0.16:52993 - 3300 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000067668s
	[INFO] 10.244.0.16:48520 - 13721 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001134243s
	[INFO] 10.244.0.16:48520 - 13499 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001081558s
	[INFO] 10.244.0.16:54412 - 5994 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000112329s
	[INFO] 10.244.0.16:54412 - 5814 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000069884s
	[INFO] 10.244.0.20:37887 - 60931 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000151623s
	[INFO] 10.244.0.20:49921 - 57504 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000156284s
	[INFO] 10.244.0.20:44766 - 63423 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000151927s
	[INFO] 10.244.0.20:57743 - 63811 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000163579s
	[INFO] 10.244.0.20:49612 - 37551 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000194282s
	[INFO] 10.244.0.20:60295 - 48774 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000126573s
	[INFO] 10.244.0.20:48399 - 2726 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001564179s
	[INFO] 10.244.0.20:43510 - 48024 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.00215597s
	[INFO] 10.244.0.20:43607 - 4893 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001684081s
	[INFO] 10.244.0.20:43732 - 64296 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001687183s
	[INFO] 10.244.0.23:33844 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000174147s
	[INFO] 10.244.0.23:57182 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000099546s
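	
	A note on the query pattern above: the NXDOMAIN/NOERROR pairs are ordinary ndots search-path expansion, with the stub resolver appending each search suffix in turn before the bare name finally resolves. A sketch of the resolv.conf that would produce this sequence for a pod in the kube-system namespace, with the suffixes read off the queries themselves (the nameserver address and ndots value are assumptions, shown at their usual in-cluster defaults):
	
	    search kube-system.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal
	    nameserver 10.96.0.10
	    options ndots:5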
	
	
	==> describe nodes <==
	Name:               addons-052340
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-052340
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e7c291d147a7a4c759554efbd6d659a1a65fa869
	                    minikube.k8s.io/name=addons-052340
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T20_12_19_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-052340
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-052340"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 20:12:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-052340
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 20:17:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 20:14:52 +0000   Wed, 17 Dec 2025 20:12:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 20:14:52 +0000   Wed, 17 Dec 2025 20:12:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 20:14:52 +0000   Wed, 17 Dec 2025 20:12:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 20:14:52 +0000   Wed, 17 Dec 2025 20:12:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-052340
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 0dc957e113b26e583da13082693ddabc
	  System UUID:                d9215e55-a3af-4a96-a35c-a8b4e9371aea
	  Boot ID:                    7e62835b-3b3c-4293-85c7-fb0aa46e4d00
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m9s
	  default                     cloud-spanner-emulator-5bdddb765-dfn99       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m36s
	  default                     hello-world-app-5d498dc89-qq2dj              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  gadget                      gadget-sw4gn                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m35s
	  gcp-auth                    gcp-auth-78565c9fb4-sc72c                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m30s
	  ingress-nginx               ingress-nginx-controller-85d4c799dd-c8vnl    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         4m34s
	  kube-system                 coredns-66bc5c9577-gnsjt                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     4m41s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m33s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m33s
	  kube-system                 csi-hostpathplugin-r5tvz                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	  kube-system                 etcd-addons-052340                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         4m46s
	  kube-system                 kindnet-sk69j                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      4m42s
	  kube-system                 kube-apiserver-addons-052340                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m46s
	  kube-system                 kube-controller-manager-addons-052340        200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m46s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m35s
	  kube-system                 kube-proxy-k6bpd                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m42s
	  kube-system                 kube-scheduler-addons-052340                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m46s
	  kube-system                 metrics-server-85b7d694d7-5g267              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         4m35s
	  kube-system                 nvidia-device-plugin-daemonset-b7cpw         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	  kube-system                 registry-6b586f9694-h2xmf                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m36s
	  kube-system                 registry-creds-764b6fb674-4s27d              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m37s
	  kube-system                 registry-proxy-5q5m2                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	  kube-system                 snapshot-controller-7d9fbc56b8-4pf8h         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m34s
	  kube-system                 snapshot-controller-7d9fbc56b8-7528t         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m34s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m35s
	  local-path-storage          local-path-provisioner-648f6765c9-gtfnn      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m34s
	  yakd-dashboard              yakd-dashboard-6654c87f9b-pgp6s              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     4m35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 4m39s  kube-proxy       
	  Normal   Starting                 4m47s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 4m47s  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  4m46s  kubelet          Node addons-052340 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m46s  kubelet          Node addons-052340 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m46s  kubelet          Node addons-052340 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4m42s  node-controller  Node addons-052340 event: Registered Node addons-052340 in Controller
	  Normal   NodeReady                4m27s  kubelet          Node addons-052340 status is now: NodeReady
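	
	As a quick check on the Allocated resources percentages above, against the node's allocatable figures (2 CPUs, 8022300Ki memory; kubectl appears to truncate to whole percentages):
	
	    cpu requests:     1050m / 2000m                      = 52.5%  -> 52%
	    memory requests:  638Mi = 653312Ki; 653312 / 8022300 ≈ 8.1%   -> 8%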
	
	
	==> dmesg <==
	[Dec17 19:02] hrtimer: interrupt took 71042327 ns
	[Dec17 20:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec17 20:12] overlayfs: idmapped layers are currently not supported
	[  +0.080288] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [45a4f23a594c0e4311b2eac64abad5f26d74229e5aa0a19e11b7b7c18f0e227d] <==
	{"level":"warn","ts":"2025-12-17T20:12:14.587122Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:12:14.607118Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:12:14.625375Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:12:14.683690Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:12:14.712657Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:12:14.732601Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:12:14.788258Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:12:14.828131Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:12:14.849275Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:12:14.920203Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:12:14.944635Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:12:14.977457Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:12:15.024085Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:12:15.119501Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:12:15.129570Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:12:15.140512Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:12:15.191606Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:12:15.211329Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:12:15.379827Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:12:32.478622Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:12:32.497865Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:12:41.060987Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:12:41.072325Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:12:41.108339Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:12:41.124628Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50372","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [8268763706c3ec63a63b1f57cba0bf634e39c5b8a6b84bd8166ab9a00e8d2168] <==
	2025/12/17 20:13:40 GCP Auth Webhook started!
	2025/12/17 20:13:56 Ready to marshal response ...
	2025/12/17 20:13:56 Ready to write response ...
	2025/12/17 20:13:56 Ready to marshal response ...
	2025/12/17 20:13:56 Ready to write response ...
	2025/12/17 20:13:56 Ready to marshal response ...
	2025/12/17 20:13:56 Ready to write response ...
	2025/12/17 20:14:18 Ready to marshal response ...
	2025/12/17 20:14:18 Ready to write response ...
	2025/12/17 20:14:19 Ready to marshal response ...
	2025/12/17 20:14:19 Ready to write response ...
	2025/12/17 20:14:19 Ready to marshal response ...
	2025/12/17 20:14:19 Ready to write response ...
	2025/12/17 20:14:28 Ready to marshal response ...
	2025/12/17 20:14:28 Ready to write response ...
	2025/12/17 20:14:43 Ready to marshal response ...
	2025/12/17 20:14:43 Ready to write response ...
	2025/12/17 20:14:46 Ready to marshal response ...
	2025/12/17 20:14:46 Ready to write response ...
	2025/12/17 20:15:14 Ready to marshal response ...
	2025/12/17 20:15:14 Ready to write response ...
	2025/12/17 20:17:02 Ready to marshal response ...
	2025/12/17 20:17:02 Ready to write response ...
	
	
	==> kernel <==
	 20:17:05 up  2:59,  0 user,  load average: 0.87, 1.55, 1.96
	Linux addons-052340 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7fd15b6b594715c56d64ee54d7a6852f04c749cfdf1c08a4bcf60d7a2e38d3b6] <==
	I1217 20:14:58.131768       1 main.go:301] handling current node
	I1217 20:15:08.138049       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 20:15:08.138083       1 main.go:301] handling current node
	I1217 20:15:18.130357       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 20:15:18.130478       1 main.go:301] handling current node
	I1217 20:15:28.130070       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 20:15:28.130408       1 main.go:301] handling current node
	I1217 20:15:38.136248       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 20:15:38.136288       1 main.go:301] handling current node
	I1217 20:15:48.136188       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 20:15:48.136231       1 main.go:301] handling current node
	I1217 20:15:58.136776       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 20:15:58.136831       1 main.go:301] handling current node
	I1217 20:16:08.130897       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 20:16:08.130932       1 main.go:301] handling current node
	I1217 20:16:18.132933       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 20:16:18.132971       1 main.go:301] handling current node
	I1217 20:16:28.129970       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 20:16:28.130031       1 main.go:301] handling current node
	I1217 20:16:38.135675       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 20:16:38.135778       1 main.go:301] handling current node
	I1217 20:16:48.135074       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 20:16:48.135119       1 main.go:301] handling current node
	I1217 20:16:58.129768       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 20:16:58.129896       1 main.go:301] handling current node
	
	
	==> kube-apiserver [873499eab93d6e655f10d1bf2c19abe29efc74e958b7418723b5c7c02699e059] <==
	W1217 20:12:38.574012       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.99.249.63:443: connect: connection refused
	E1217 20:12:38.574056       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.99.249.63:443: connect: connection refused" logger="UnhandledError"
	W1217 20:12:38.672984       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.99.249.63:443: connect: connection refused
	E1217 20:12:38.673109       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.99.249.63:443: connect: connection refused" logger="UnhandledError"
	W1217 20:12:41.054536       1 logging.go:55] [core] [Channel #267 SubChannel #268]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1217 20:12:41.072285       1 logging.go:55] [core] [Channel #271 SubChannel #272]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1217 20:12:41.102127       1 logging.go:55] [core] [Channel #275 SubChannel #276]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1217 20:12:41.118259       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	E1217 20:13:02.486324       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.215.219:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.100.215.219:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.100.215.219:443: connect: connection refused" logger="UnhandledError"
	W1217 20:13:02.486801       1 handler_proxy.go:99] no RequestInfo found in the context
	E1217 20:13:02.486972       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1217 20:13:02.487756       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.215.219:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.100.215.219:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.100.215.219:443: connect: connection refused" logger="UnhandledError"
	E1217 20:13:02.494634       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.215.219:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.100.215.219:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.100.215.219:443: connect: connection refused" logger="UnhandledError"
	I1217 20:13:02.592605       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1217 20:14:05.867119       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:55060: use of closed network connection
	E1217 20:14:06.110763       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:55086: use of closed network connection
	E1217 20:14:06.239986       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:55102: use of closed network connection
	I1217 20:14:43.255508       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1217 20:14:43.553737       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.101.207.67"}
	I1217 20:14:53.420522       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E1217 20:14:55.038915       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I1217 20:17:03.022862       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.110.116.135"}
	
	
	==> kube-controller-manager [dd0351330b604610224568f93b842fec24e6e4e932b309a6741ffab6f0d17b9f] <==
	I1217 20:12:23.196964       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1217 20:12:23.196967       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1217 20:12:23.197285       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1217 20:12:23.197543       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1217 20:12:23.197769       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1217 20:12:23.197308       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1217 20:12:23.197859       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1217 20:12:23.197322       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1217 20:12:23.198021       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1217 20:12:23.202370       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1217 20:12:23.202471       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1217 20:12:23.204811       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1217 20:12:23.210980       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1217 20:12:23.216224       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 20:12:23.233666       1 shared_informer.go:356] "Caches are synced" controller="job"
	E1217 20:12:30.750065       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1217 20:12:30.775341       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	I1217 20:12:43.172311       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1217 20:12:53.175090       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1217 20:12:53.175518       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1217 20:12:53.175641       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1217 20:12:53.235417       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1217 20:12:53.250843       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1217 20:12:53.277731       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 20:12:53.353012       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [6311d7d7f6f04d57961651f5acfdebe6792efa3db55df12e18440d726be5abcf] <==
	I1217 20:12:25.151776       1 server_linux.go:53] "Using iptables proxy"
	I1217 20:12:25.253551       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1217 20:12:25.354279       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1217 20:12:25.354344       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1217 20:12:25.354460       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 20:12:25.421511       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 20:12:25.421573       1 server_linux.go:132] "Using iptables Proxier"
	I1217 20:12:25.435839       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 20:12:25.436282       1 server.go:527] "Version info" version="v1.34.3"
	I1217 20:12:25.436307       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 20:12:25.445057       1 config.go:200] "Starting service config controller"
	I1217 20:12:25.445076       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 20:12:25.445095       1 config.go:106] "Starting endpoint slice config controller"
	I1217 20:12:25.445099       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 20:12:25.445117       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 20:12:25.445129       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 20:12:25.446188       1 config.go:309] "Starting node config controller"
	I1217 20:12:25.446208       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 20:12:25.446215       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 20:12:25.546379       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1217 20:12:25.546420       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1217 20:12:25.546461       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [6481ede2a21b9816559407d8564d57427a11b523832053f995eb07f4d1ce83bc] <==
	I1217 20:12:17.384400       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 20:12:17.387192       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1217 20:12:17.387291       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 20:12:17.387314       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 20:12:17.387332       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1217 20:12:17.390308       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1217 20:12:17.390409       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1217 20:12:17.390470       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1217 20:12:17.400062       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1217 20:12:17.400179       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1217 20:12:17.404361       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1217 20:12:17.404546       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1217 20:12:17.404820       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1217 20:12:17.404987       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1217 20:12:17.405093       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1217 20:12:17.407109       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1217 20:12:17.407231       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1217 20:12:17.407310       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1217 20:12:17.407555       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1217 20:12:17.407697       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1217 20:12:17.407810       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1217 20:12:17.407900       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1217 20:12:17.407982       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1217 20:12:17.408065       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	I1217 20:12:18.488293       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 17 20:15:22 addons-052340 kubelet[1289]: I1217 20:15:22.161133    1289 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wd57l\" (UniqueName: \"kubernetes.io/projected/385c187a-0a5b-4efc-8aac-5884ab73a78c-kube-api-access-wd57l\") on node \"addons-052340\" DevicePath \"\""
	Dec 17 20:15:22 addons-052340 kubelet[1289]: I1217 20:15:22.161192    1289 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-a8292ecb-2020-49f9-b31e-764c19b88ec7\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^18b01f04-db85-11f0-b09c-1a2bcf4e984d\") on node \"addons-052340\" "
	Dec 17 20:15:22 addons-052340 kubelet[1289]: I1217 20:15:22.161205    1289 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/385c187a-0a5b-4efc-8aac-5884ab73a78c-gcp-creds\") on node \"addons-052340\" DevicePath \"\""
	Dec 17 20:15:22 addons-052340 kubelet[1289]: I1217 20:15:22.167558    1289 operation_generator.go:895] UnmountDevice succeeded for volume "pvc-a8292ecb-2020-49f9-b31e-764c19b88ec7" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^18b01f04-db85-11f0-b09c-1a2bcf4e984d") on node "addons-052340"
	Dec 17 20:15:22 addons-052340 kubelet[1289]: I1217 20:15:22.262172    1289 reconciler_common.go:299] "Volume detached for volume \"pvc-a8292ecb-2020-49f9-b31e-764c19b88ec7\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^18b01f04-db85-11f0-b09c-1a2bcf4e984d\") on node \"addons-052340\" DevicePath \"\""
	Dec 17 20:15:22 addons-052340 kubelet[1289]: I1217 20:15:22.919544    1289 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-h2xmf" secret="" err="secret \"gcp-auth\" not found"
	Dec 17 20:15:22 addons-052340 kubelet[1289]: I1217 20:15:22.923823    1289 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="385c187a-0a5b-4efc-8aac-5884ab73a78c" path="/var/lib/kubelet/pods/385c187a-0a5b-4efc-8aac-5884ab73a78c/volumes"
	Dec 17 20:15:55 addons-052340 kubelet[1289]: I1217 20:15:55.918868    1289 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-b7cpw" secret="" err="secret \"gcp-auth\" not found"
	Dec 17 20:16:04 addons-052340 kubelet[1289]: I1217 20:16:04.918638    1289 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-5q5m2" secret="" err="secret \"gcp-auth\" not found"
	Dec 17 20:16:44 addons-052340 kubelet[1289]: I1217 20:16:44.918770    1289 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-h2xmf" secret="" err="secret \"gcp-auth\" not found"
	Dec 17 20:16:48 addons-052340 kubelet[1289]: I1217 20:16:48.819110    1289 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-4s27d" secret="" err="secret \"gcp-auth\" not found"
	Dec 17 20:16:48 addons-052340 kubelet[1289]: W1217 20:16:48.848065    1289 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/b27951342508b91edb73f9914fc67563412a9b1c899ae0a2e16b6aa92b97dafa/crio-70d70fbb949b6623c597f7ffd09cc9b32c08bd9a12b59fac484b1717882f1533 WatchSource:0}: Error finding container 70d70fbb949b6623c597f7ffd09cc9b32c08bd9a12b59fac484b1717882f1533: Status 404 returned error can't find the container with id 70d70fbb949b6623c597f7ffd09cc9b32c08bd9a12b59fac484b1717882f1533
	Dec 17 20:16:50 addons-052340 kubelet[1289]: I1217 20:16:50.435408    1289 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-4s27d" secret="" err="secret \"gcp-auth\" not found"
	Dec 17 20:16:50 addons-052340 kubelet[1289]: I1217 20:16:50.435468    1289 scope.go:117] "RemoveContainer" containerID="b96f9df29ef7696bb5945d1859ed2b42645fb63f17923212a3cdfea9e2967c5a"
	Dec 17 20:16:51 addons-052340 kubelet[1289]: I1217 20:16:51.441104    1289 scope.go:117] "RemoveContainer" containerID="b96f9df29ef7696bb5945d1859ed2b42645fb63f17923212a3cdfea9e2967c5a"
	Dec 17 20:16:51 addons-052340 kubelet[1289]: I1217 20:16:51.441432    1289 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-4s27d" secret="" err="secret \"gcp-auth\" not found"
	Dec 17 20:16:51 addons-052340 kubelet[1289]: I1217 20:16:51.441470    1289 scope.go:117] "RemoveContainer" containerID="f180587eafc97055443380972b1d8d70465e1257d26453ca07e976b465354171"
	Dec 17 20:16:51 addons-052340 kubelet[1289]: E1217 20:16:51.441628    1289 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 10s restarting failed container=registry-creds pod=registry-creds-764b6fb674-4s27d_kube-system(dc923fe5-3d2f-4a8f-b089-bf8bf7d8040a)\"" pod="kube-system/registry-creds-764b6fb674-4s27d" podUID="dc923fe5-3d2f-4a8f-b089-bf8bf7d8040a"
	Dec 17 20:16:52 addons-052340 kubelet[1289]: I1217 20:16:52.446652    1289 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-4s27d" secret="" err="secret \"gcp-auth\" not found"
	Dec 17 20:16:52 addons-052340 kubelet[1289]: I1217 20:16:52.446712    1289 scope.go:117] "RemoveContainer" containerID="f180587eafc97055443380972b1d8d70465e1257d26453ca07e976b465354171"
	Dec 17 20:16:52 addons-052340 kubelet[1289]: E1217 20:16:52.446864    1289 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 10s restarting failed container=registry-creds pod=registry-creds-764b6fb674-4s27d_kube-system(dc923fe5-3d2f-4a8f-b089-bf8bf7d8040a)\"" pod="kube-system/registry-creds-764b6fb674-4s27d" podUID="dc923fe5-3d2f-4a8f-b089-bf8bf7d8040a"
	Dec 17 20:17:02 addons-052340 kubelet[1289]: I1217 20:17:02.955926    1289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cz7m7\" (UniqueName: \"kubernetes.io/projected/1d0a357a-92a7-4fe9-9b80-300ddd734b34-kube-api-access-cz7m7\") pod \"hello-world-app-5d498dc89-qq2dj\" (UID: \"1d0a357a-92a7-4fe9-9b80-300ddd734b34\") " pod="default/hello-world-app-5d498dc89-qq2dj"
	Dec 17 20:17:02 addons-052340 kubelet[1289]: I1217 20:17:02.955993    1289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/1d0a357a-92a7-4fe9-9b80-300ddd734b34-gcp-creds\") pod \"hello-world-app-5d498dc89-qq2dj\" (UID: \"1d0a357a-92a7-4fe9-9b80-300ddd734b34\") " pod="default/hello-world-app-5d498dc89-qq2dj"
	Dec 17 20:17:03 addons-052340 kubelet[1289]: W1217 20:17:03.172095    1289 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/b27951342508b91edb73f9914fc67563412a9b1c899ae0a2e16b6aa92b97dafa/crio-5b2fec9c09fdb8ec07c46748990a43fe531892e9103b8f54123f11919f6565c3 WatchSource:0}: Error finding container 5b2fec9c09fdb8ec07c46748990a43fe531892e9103b8f54123f11919f6565c3: Status 404 returned error can't find the container with id 5b2fec9c09fdb8ec07c46748990a43fe531892e9103b8f54123f11919f6565c3
	Dec 17 20:17:04 addons-052340 kubelet[1289]: I1217 20:17:04.533543    1289 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-5d498dc89-qq2dj" podStartSLOduration=1.8806740199999998 podStartE2EDuration="2.533520603s" podCreationTimestamp="2025-12-17 20:17:02 +0000 UTC" firstStartedPulling="2025-12-17 20:17:03.177613552 +0000 UTC m=+284.373581486" lastFinishedPulling="2025-12-17 20:17:03.830460134 +0000 UTC m=+285.026428069" observedRunningTime="2025-12-17 20:17:04.532295708 +0000 UTC m=+285.728263651" watchObservedRunningTime="2025-12-17 20:17:04.533520603 +0000 UTC m=+285.729488538"
	
	
	==> storage-provisioner [2b6e269a93d8bccdf1b28bd4369628c766dbf092b4be199c7da9d8d756358c68] <==
	W1217 20:16:39.498619       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:16:41.502373       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:16:41.507097       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:16:43.510100       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:16:43.515617       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:16:45.521529       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:16:45.526674       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:16:47.530421       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:16:47.537448       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:16:49.540716       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:16:49.550696       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:16:51.554331       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:16:51.559457       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:16:53.563712       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:16:53.568428       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:16:55.572219       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:16:55.577106       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:16:57.580644       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:16:57.587725       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:16:59.590735       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:16:59.596049       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:17:01.599914       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:17:01.610249       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:17:03.614097       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:17:03.625831       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
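The storage-provisioner warnings that close the dump above are deprecation noise rather than a failure: the provisioner (most likely its leader-election client) still watches the v1 Endpoints API, which the server reports as deprecated in v1.33+ in favor of discovery.k8s.io/v1 EndpointSlice. A hedged sketch for comparing the two resources against this cluster; the context name is taken from this run, everything else is stock kubectl:

	# EndpointSlices, the discovery.k8s.io/v1 replacement the warning points at:
	kubectl --context addons-052340 get endpointslices.discovery.k8s.io -A
	# The deprecated v1 Endpoints objects the provisioner still watches:
	kubectl --context addons-052340 get endpoints -n kube-system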
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-052340 -n addons-052340
helpers_test.go:270: (dbg) Run:  kubectl --context addons-052340 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: ingress-nginx-admission-create-tqlpn ingress-nginx-admission-patch-h9l82
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context addons-052340 describe pod ingress-nginx-admission-create-tqlpn ingress-nginx-admission-patch-h9l82
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-052340 describe pod ingress-nginx-admission-create-tqlpn ingress-nginx-admission-patch-h9l82: exit status 1 (112.258721ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-tqlpn" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-h9l82" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context addons-052340 describe pod ingress-nginx-admission-create-tqlpn ingress-nginx-admission-patch-h9l82: exit status 1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-052340 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-052340 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (315.272735ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1217 20:17:06.502288  499039 out.go:360] Setting OutFile to fd 1 ...
	I1217 20:17:06.503125  499039 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:17:06.503149  499039 out.go:374] Setting ErrFile to fd 2...
	I1217 20:17:06.503157  499039 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:17:06.503480  499039 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-485134/.minikube/bin
	I1217 20:17:06.503929  499039 mustload.go:66] Loading cluster: addons-052340
	I1217 20:17:06.504351  499039 config.go:182] Loaded profile config "addons-052340": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:17:06.504374  499039 addons.go:622] checking whether the cluster is paused
	I1217 20:17:06.504554  499039 config.go:182] Loaded profile config "addons-052340": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:17:06.504579  499039 host.go:66] Checking if "addons-052340" exists ...
	I1217 20:17:06.505160  499039 cli_runner.go:164] Run: docker container inspect addons-052340 --format={{.State.Status}}
	I1217 20:17:06.530465  499039 ssh_runner.go:195] Run: systemctl --version
	I1217 20:17:06.530545  499039 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-052340
	I1217 20:17:06.562616  499039 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/addons-052340/id_rsa Username:docker}
	I1217 20:17:06.670642  499039 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 20:17:06.670747  499039 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 20:17:06.722192  499039 cri.go:89] found id: "f180587eafc97055443380972b1d8d70465e1257d26453ca07e976b465354171"
	I1217 20:17:06.722219  499039 cri.go:89] found id: "a40f8c4c9d66754b6dd5a9559839fcf1fe8c00a06c71ce2f09926993d2c3d9eb"
	I1217 20:17:06.722225  499039 cri.go:89] found id: "5b40645a2f29661a7d2787522f23071b2c894de3a6b5b7638a2111a3f9485374"
	I1217 20:17:06.722230  499039 cri.go:89] found id: "4a7c07a0b97549aa9a9ce8e8e19127c9c8cf4e24abdccd8f61dcd057b97c9d88"
	I1217 20:17:06.722244  499039 cri.go:89] found id: "d3baa47458bc085e8d55815cf0497461a119e95081ce45cce1d577901a7dbf8d"
	I1217 20:17:06.722269  499039 cri.go:89] found id: "fcc9a7828f6b875df447059eabaf032b1c2d17e3c003bd1948f528d321bd3c85"
	I1217 20:17:06.722278  499039 cri.go:89] found id: "765f61bbb3ba86834abad3ede0a99451b487e6dc73b004c63a20a8f02e1d84a5"
	I1217 20:17:06.722282  499039 cri.go:89] found id: "d33875f7a8b74239db487ffa04bcf41c083f1ad65f29c43e2ae8b0dd783b521c"
	I1217 20:17:06.722286  499039 cri.go:89] found id: "79a9e5f9433489a03f4b2296098c887736f30cde921105d6ffa972cf7406abb4"
	I1217 20:17:06.722293  499039 cri.go:89] found id: "8d7f5c62629eb9e2812369690fb943f4e57ef24985d574b704ffc9b79a56d549"
	I1217 20:17:06.722299  499039 cri.go:89] found id: "af528db69b5d34f2ebcd038272fbfe9866e82f22a612276fb737683d83c256b9"
	I1217 20:17:06.722303  499039 cri.go:89] found id: "028c23d163f91b2b1e5d8071a70c7fb133ac203b414302242719cd40dba8d733"
	I1217 20:17:06.722327  499039 cri.go:89] found id: "85713e1610062bb4691b694312bf4faa71e96232dd94bc9fc564053646cde3a0"
	I1217 20:17:06.722343  499039 cri.go:89] found id: "cb82734cbf7f969e92c5a0b06400eb1d90d484748c850c4888051348e720776e"
	I1217 20:17:06.722349  499039 cri.go:89] found id: "f60e3e143ad43d5768067a6b27a4ece464edd1136c6931405c68bea040dd1097"
	I1217 20:17:06.722355  499039 cri.go:89] found id: "18d9f4acabfa7df7f42816d2545211aeda7f7362f2af1f80e188ea780addc6f8"
	I1217 20:17:06.722358  499039 cri.go:89] found id: "fead2bfafa7367c08236e1dd9040e848a879a067c3b70a78578d71429854ebf6"
	I1217 20:17:06.722368  499039 cri.go:89] found id: "2b6e269a93d8bccdf1b28bd4369628c766dbf092b4be199c7da9d8d756358c68"
	I1217 20:17:06.722372  499039 cri.go:89] found id: "7fd15b6b594715c56d64ee54d7a6852f04c749cfdf1c08a4bcf60d7a2e38d3b6"
	I1217 20:17:06.722375  499039 cri.go:89] found id: "6311d7d7f6f04d57961651f5acfdebe6792efa3db55df12e18440d726be5abcf"
	I1217 20:17:06.722380  499039 cri.go:89] found id: "6481ede2a21b9816559407d8564d57427a11b523832053f995eb07f4d1ce83bc"
	I1217 20:17:06.722383  499039 cri.go:89] found id: "873499eab93d6e655f10d1bf2c19abe29efc74e958b7418723b5c7c02699e059"
	I1217 20:17:06.722386  499039 cri.go:89] found id: "45a4f23a594c0e4311b2eac64abad5f26d74229e5aa0a19e11b7b7c18f0e227d"
	I1217 20:17:06.722410  499039 cri.go:89] found id: "dd0351330b604610224568f93b842fec24e6e4e932b309a6741ffab6f0d17b9f"
	I1217 20:17:06.722414  499039 cri.go:89] found id: ""
	I1217 20:17:06.722490  499039 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 20:17:06.740352  499039 out.go:203] 
	W1217 20:17:06.743463  499039 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T20:17:06Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T20:17:06Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 20:17:06.743506  499039 out.go:285] * 
	* 
	W1217 20:17:06.749520  499039 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 20:17:06.752747  499039 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable ingress-dns addon: args "out/minikube-linux-arm64 -p addons-052340 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-052340 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-052340 addons disable ingress --alsologtostderr -v=1: exit status 11 (311.705381ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1217 20:17:06.820639  499081 out.go:360] Setting OutFile to fd 1 ...
	I1217 20:17:06.821457  499081 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:17:06.821499  499081 out.go:374] Setting ErrFile to fd 2...
	I1217 20:17:06.821523  499081 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:17:06.821842  499081 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-485134/.minikube/bin
	I1217 20:17:06.822265  499081 mustload.go:66] Loading cluster: addons-052340
	I1217 20:17:06.822739  499081 config.go:182] Loaded profile config "addons-052340": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:17:06.822784  499081 addons.go:622] checking whether the cluster is paused
	I1217 20:17:06.822934  499081 config.go:182] Loaded profile config "addons-052340": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:17:06.822975  499081 host.go:66] Checking if "addons-052340" exists ...
	I1217 20:17:06.823648  499081 cli_runner.go:164] Run: docker container inspect addons-052340 --format={{.State.Status}}
	I1217 20:17:06.844296  499081 ssh_runner.go:195] Run: systemctl --version
	I1217 20:17:06.844358  499081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-052340
	I1217 20:17:06.863349  499081 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/addons-052340/id_rsa Username:docker}
	I1217 20:17:06.964466  499081 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 20:17:06.964587  499081 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 20:17:07.019307  499081 cri.go:89] found id: "0fd1495014d73ade4b708229f669886fc5ccba561147833b6e39e3ad03360786"
	I1217 20:17:07.019333  499081 cri.go:89] found id: "f180587eafc97055443380972b1d8d70465e1257d26453ca07e976b465354171"
	I1217 20:17:07.019339  499081 cri.go:89] found id: "a40f8c4c9d66754b6dd5a9559839fcf1fe8c00a06c71ce2f09926993d2c3d9eb"
	I1217 20:17:07.019348  499081 cri.go:89] found id: "5b40645a2f29661a7d2787522f23071b2c894de3a6b5b7638a2111a3f9485374"
	I1217 20:17:07.019351  499081 cri.go:89] found id: "4a7c07a0b97549aa9a9ce8e8e19127c9c8cf4e24abdccd8f61dcd057b97c9d88"
	I1217 20:17:07.019365  499081 cri.go:89] found id: "d3baa47458bc085e8d55815cf0497461a119e95081ce45cce1d577901a7dbf8d"
	I1217 20:17:07.019368  499081 cri.go:89] found id: "fcc9a7828f6b875df447059eabaf032b1c2d17e3c003bd1948f528d321bd3c85"
	I1217 20:17:07.019371  499081 cri.go:89] found id: "765f61bbb3ba86834abad3ede0a99451b487e6dc73b004c63a20a8f02e1d84a5"
	I1217 20:17:07.019374  499081 cri.go:89] found id: "d33875f7a8b74239db487ffa04bcf41c083f1ad65f29c43e2ae8b0dd783b521c"
	I1217 20:17:07.019380  499081 cri.go:89] found id: "79a9e5f9433489a03f4b2296098c887736f30cde921105d6ffa972cf7406abb4"
	I1217 20:17:07.019383  499081 cri.go:89] found id: "8d7f5c62629eb9e2812369690fb943f4e57ef24985d574b704ffc9b79a56d549"
	I1217 20:17:07.019386  499081 cri.go:89] found id: "af528db69b5d34f2ebcd038272fbfe9866e82f22a612276fb737683d83c256b9"
	I1217 20:17:07.019389  499081 cri.go:89] found id: "028c23d163f91b2b1e5d8071a70c7fb133ac203b414302242719cd40dba8d733"
	I1217 20:17:07.019392  499081 cri.go:89] found id: "85713e1610062bb4691b694312bf4faa71e96232dd94bc9fc564053646cde3a0"
	I1217 20:17:07.019396  499081 cri.go:89] found id: "cb82734cbf7f969e92c5a0b06400eb1d90d484748c850c4888051348e720776e"
	I1217 20:17:07.019401  499081 cri.go:89] found id: "f60e3e143ad43d5768067a6b27a4ece464edd1136c6931405c68bea040dd1097"
	I1217 20:17:07.019404  499081 cri.go:89] found id: "18d9f4acabfa7df7f42816d2545211aeda7f7362f2af1f80e188ea780addc6f8"
	I1217 20:17:07.019408  499081 cri.go:89] found id: "fead2bfafa7367c08236e1dd9040e848a879a067c3b70a78578d71429854ebf6"
	I1217 20:17:07.019411  499081 cri.go:89] found id: "2b6e269a93d8bccdf1b28bd4369628c766dbf092b4be199c7da9d8d756358c68"
	I1217 20:17:07.019414  499081 cri.go:89] found id: "7fd15b6b594715c56d64ee54d7a6852f04c749cfdf1c08a4bcf60d7a2e38d3b6"
	I1217 20:17:07.019419  499081 cri.go:89] found id: "6311d7d7f6f04d57961651f5acfdebe6792efa3db55df12e18440d726be5abcf"
	I1217 20:17:07.019422  499081 cri.go:89] found id: "6481ede2a21b9816559407d8564d57427a11b523832053f995eb07f4d1ce83bc"
	I1217 20:17:07.019426  499081 cri.go:89] found id: "873499eab93d6e655f10d1bf2c19abe29efc74e958b7418723b5c7c02699e059"
	I1217 20:17:07.019429  499081 cri.go:89] found id: "45a4f23a594c0e4311b2eac64abad5f26d74229e5aa0a19e11b7b7c18f0e227d"
	I1217 20:17:07.019432  499081 cri.go:89] found id: "dd0351330b604610224568f93b842fec24e6e4e932b309a6741ffab6f0d17b9f"
	I1217 20:17:07.019466  499081 cri.go:89] found id: ""
	I1217 20:17:07.019634  499081 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 20:17:07.052259  499081 out.go:203] 
	W1217 20:17:07.055479  499081 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T20:17:07Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T20:17:07Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 20:17:07.055524  499081 out.go:285] * 
	* 
	W1217 20:17:07.061486  499081 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 20:17:07.064724  499081 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable ingress addon: args "out/minikube-linux-arm64 -p addons-052340 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (144.12s)
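Every failing `addons disable` call in this report exits with MK_ADDON_DISABLE_PAUSED for the same underlying reason: minikube's paused-state check (cri.go in the stderr traces above) shells into the node and runs `sudo runc list -f json`, and /run/runc does not exist on this crio node. A minimal reproduction sketch, assuming docker exec access from the host to the addons-052340 node container:

	# The exact command minikube runs over SSH; it fails with
	# "open /run/runc: no such file or directory", as captured above.
	docker exec addons-052340 sudo runc list -f json
	# Confirm that the runc state directory is simply absent under crio:
	docker exec addons-052340 ls -d /run/runc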

TestAddons/parallel/InspektorGadget (5.29s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-sw4gn" [5ec98801-c8ad-4f90-8932-a61d3c32e8cd] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.00564941s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-052340 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-052340 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (284.04509ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1217 20:14:42.736231  496898 out.go:360] Setting OutFile to fd 1 ...
	I1217 20:14:42.737039  496898 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:14:42.737054  496898 out.go:374] Setting ErrFile to fd 2...
	I1217 20:14:42.737060  496898 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:14:42.737332  496898 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-485134/.minikube/bin
	I1217 20:14:42.737637  496898 mustload.go:66] Loading cluster: addons-052340
	I1217 20:14:42.738010  496898 config.go:182] Loaded profile config "addons-052340": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:14:42.738021  496898 addons.go:622] checking whether the cluster is paused
	I1217 20:14:42.738126  496898 config.go:182] Loaded profile config "addons-052340": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:14:42.738136  496898 host.go:66] Checking if "addons-052340" exists ...
	I1217 20:14:42.738635  496898 cli_runner.go:164] Run: docker container inspect addons-052340 --format={{.State.Status}}
	I1217 20:14:42.761326  496898 ssh_runner.go:195] Run: systemctl --version
	I1217 20:14:42.761381  496898 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-052340
	I1217 20:14:42.781138  496898 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/addons-052340/id_rsa Username:docker}
	I1217 20:14:42.882373  496898 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 20:14:42.882467  496898 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 20:14:42.913960  496898 cri.go:89] found id: "a40f8c4c9d66754b6dd5a9559839fcf1fe8c00a06c71ce2f09926993d2c3d9eb"
	I1217 20:14:42.913991  496898 cri.go:89] found id: "5b40645a2f29661a7d2787522f23071b2c894de3a6b5b7638a2111a3f9485374"
	I1217 20:14:42.913998  496898 cri.go:89] found id: "4a7c07a0b97549aa9a9ce8e8e19127c9c8cf4e24abdccd8f61dcd057b97c9d88"
	I1217 20:14:42.914002  496898 cri.go:89] found id: "d3baa47458bc085e8d55815cf0497461a119e95081ce45cce1d577901a7dbf8d"
	I1217 20:14:42.914005  496898 cri.go:89] found id: "fcc9a7828f6b875df447059eabaf032b1c2d17e3c003bd1948f528d321bd3c85"
	I1217 20:14:42.914009  496898 cri.go:89] found id: "765f61bbb3ba86834abad3ede0a99451b487e6dc73b004c63a20a8f02e1d84a5"
	I1217 20:14:42.914012  496898 cri.go:89] found id: "d33875f7a8b74239db487ffa04bcf41c083f1ad65f29c43e2ae8b0dd783b521c"
	I1217 20:14:42.914015  496898 cri.go:89] found id: "79a9e5f9433489a03f4b2296098c887736f30cde921105d6ffa972cf7406abb4"
	I1217 20:14:42.914019  496898 cri.go:89] found id: "8d7f5c62629eb9e2812369690fb943f4e57ef24985d574b704ffc9b79a56d549"
	I1217 20:14:42.914026  496898 cri.go:89] found id: "af528db69b5d34f2ebcd038272fbfe9866e82f22a612276fb737683d83c256b9"
	I1217 20:14:42.914040  496898 cri.go:89] found id: "028c23d163f91b2b1e5d8071a70c7fb133ac203b414302242719cd40dba8d733"
	I1217 20:14:42.914044  496898 cri.go:89] found id: "85713e1610062bb4691b694312bf4faa71e96232dd94bc9fc564053646cde3a0"
	I1217 20:14:42.914047  496898 cri.go:89] found id: "cb82734cbf7f969e92c5a0b06400eb1d90d484748c850c4888051348e720776e"
	I1217 20:14:42.914050  496898 cri.go:89] found id: "f60e3e143ad43d5768067a6b27a4ece464edd1136c6931405c68bea040dd1097"
	I1217 20:14:42.914053  496898 cri.go:89] found id: "18d9f4acabfa7df7f42816d2545211aeda7f7362f2af1f80e188ea780addc6f8"
	I1217 20:14:42.914058  496898 cri.go:89] found id: "fead2bfafa7367c08236e1dd9040e848a879a067c3b70a78578d71429854ebf6"
	I1217 20:14:42.914066  496898 cri.go:89] found id: "2b6e269a93d8bccdf1b28bd4369628c766dbf092b4be199c7da9d8d756358c68"
	I1217 20:14:42.914072  496898 cri.go:89] found id: "7fd15b6b594715c56d64ee54d7a6852f04c749cfdf1c08a4bcf60d7a2e38d3b6"
	I1217 20:14:42.914075  496898 cri.go:89] found id: "6311d7d7f6f04d57961651f5acfdebe6792efa3db55df12e18440d726be5abcf"
	I1217 20:14:42.914078  496898 cri.go:89] found id: "6481ede2a21b9816559407d8564d57427a11b523832053f995eb07f4d1ce83bc"
	I1217 20:14:42.914083  496898 cri.go:89] found id: "873499eab93d6e655f10d1bf2c19abe29efc74e958b7418723b5c7c02699e059"
	I1217 20:14:42.914086  496898 cri.go:89] found id: "45a4f23a594c0e4311b2eac64abad5f26d74229e5aa0a19e11b7b7c18f0e227d"
	I1217 20:14:42.914089  496898 cri.go:89] found id: "dd0351330b604610224568f93b842fec24e6e4e932b309a6741ffab6f0d17b9f"
	I1217 20:14:42.914092  496898 cri.go:89] found id: ""
	I1217 20:14:42.914148  496898 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 20:14:42.937470  496898 out.go:203] 
	W1217 20:14:42.940429  496898 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T20:14:42Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T20:14:42Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 20:14:42.940461  496898 out.go:285] * 
	* 
	W1217 20:14:42.946152  496898 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 20:14:42.949045  496898 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable inspektor-gadget addon: args "out/minikube-linux-arm64 -p addons-052340 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (5.29s)

TestAddons/parallel/MetricsServer (5.39s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 5.292197ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-85b7d694d7-5g267" [5948d9f5-4c59-4b54-9f3e-7fe6fe4859c0] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004582499s
addons_test.go:465: (dbg) Run:  kubectl --context addons-052340 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-052340 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-052340 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (270.761886ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1217 20:14:37.447544  496811 out.go:360] Setting OutFile to fd 1 ...
	I1217 20:14:37.448568  496811 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:14:37.448614  496811 out.go:374] Setting ErrFile to fd 2...
	I1217 20:14:37.448639  496811 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:14:37.448965  496811 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-485134/.minikube/bin
	I1217 20:14:37.449331  496811 mustload.go:66] Loading cluster: addons-052340
	I1217 20:14:37.449793  496811 config.go:182] Loaded profile config "addons-052340": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:14:37.449844  496811 addons.go:622] checking whether the cluster is paused
	I1217 20:14:37.449986  496811 config.go:182] Loaded profile config "addons-052340": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:14:37.450024  496811 host.go:66] Checking if "addons-052340" exists ...
	I1217 20:14:37.450611  496811 cli_runner.go:164] Run: docker container inspect addons-052340 --format={{.State.Status}}
	I1217 20:14:37.470111  496811 ssh_runner.go:195] Run: systemctl --version
	I1217 20:14:37.470165  496811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-052340
	I1217 20:14:37.490870  496811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/addons-052340/id_rsa Username:docker}
	I1217 20:14:37.594753  496811 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 20:14:37.594867  496811 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 20:14:37.628786  496811 cri.go:89] found id: "a40f8c4c9d66754b6dd5a9559839fcf1fe8c00a06c71ce2f09926993d2c3d9eb"
	I1217 20:14:37.628814  496811 cri.go:89] found id: "5b40645a2f29661a7d2787522f23071b2c894de3a6b5b7638a2111a3f9485374"
	I1217 20:14:37.628820  496811 cri.go:89] found id: "4a7c07a0b97549aa9a9ce8e8e19127c9c8cf4e24abdccd8f61dcd057b97c9d88"
	I1217 20:14:37.628824  496811 cri.go:89] found id: "d3baa47458bc085e8d55815cf0497461a119e95081ce45cce1d577901a7dbf8d"
	I1217 20:14:37.628828  496811 cri.go:89] found id: "fcc9a7828f6b875df447059eabaf032b1c2d17e3c003bd1948f528d321bd3c85"
	I1217 20:14:37.628832  496811 cri.go:89] found id: "765f61bbb3ba86834abad3ede0a99451b487e6dc73b004c63a20a8f02e1d84a5"
	I1217 20:14:37.628835  496811 cri.go:89] found id: "d33875f7a8b74239db487ffa04bcf41c083f1ad65f29c43e2ae8b0dd783b521c"
	I1217 20:14:37.628838  496811 cri.go:89] found id: "79a9e5f9433489a03f4b2296098c887736f30cde921105d6ffa972cf7406abb4"
	I1217 20:14:37.628841  496811 cri.go:89] found id: "8d7f5c62629eb9e2812369690fb943f4e57ef24985d574b704ffc9b79a56d549"
	I1217 20:14:37.628852  496811 cri.go:89] found id: "af528db69b5d34f2ebcd038272fbfe9866e82f22a612276fb737683d83c256b9"
	I1217 20:14:37.628856  496811 cri.go:89] found id: "028c23d163f91b2b1e5d8071a70c7fb133ac203b414302242719cd40dba8d733"
	I1217 20:14:37.628859  496811 cri.go:89] found id: "85713e1610062bb4691b694312bf4faa71e96232dd94bc9fc564053646cde3a0"
	I1217 20:14:37.628862  496811 cri.go:89] found id: "cb82734cbf7f969e92c5a0b06400eb1d90d484748c850c4888051348e720776e"
	I1217 20:14:37.628866  496811 cri.go:89] found id: "f60e3e143ad43d5768067a6b27a4ece464edd1136c6931405c68bea040dd1097"
	I1217 20:14:37.628869  496811 cri.go:89] found id: "18d9f4acabfa7df7f42816d2545211aeda7f7362f2af1f80e188ea780addc6f8"
	I1217 20:14:37.628887  496811 cri.go:89] found id: "fead2bfafa7367c08236e1dd9040e848a879a067c3b70a78578d71429854ebf6"
	I1217 20:14:37.628894  496811 cri.go:89] found id: "2b6e269a93d8bccdf1b28bd4369628c766dbf092b4be199c7da9d8d756358c68"
	I1217 20:14:37.628901  496811 cri.go:89] found id: "7fd15b6b594715c56d64ee54d7a6852f04c749cfdf1c08a4bcf60d7a2e38d3b6"
	I1217 20:14:37.628904  496811 cri.go:89] found id: "6311d7d7f6f04d57961651f5acfdebe6792efa3db55df12e18440d726be5abcf"
	I1217 20:14:37.628907  496811 cri.go:89] found id: "6481ede2a21b9816559407d8564d57427a11b523832053f995eb07f4d1ce83bc"
	I1217 20:14:37.628912  496811 cri.go:89] found id: "873499eab93d6e655f10d1bf2c19abe29efc74e958b7418723b5c7c02699e059"
	I1217 20:14:37.628915  496811 cri.go:89] found id: "45a4f23a594c0e4311b2eac64abad5f26d74229e5aa0a19e11b7b7c18f0e227d"
	I1217 20:14:37.628917  496811 cri.go:89] found id: "dd0351330b604610224568f93b842fec24e6e4e932b309a6741ffab6f0d17b9f"
	I1217 20:14:37.628920  496811 cri.go:89] found id: ""
	I1217 20:14:37.628972  496811 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 20:14:37.644709  496811 out.go:203] 
	W1217 20:14:37.647899  496811 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T20:14:37Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 20:14:37.647931  496811 out.go:285] * 
	W1217 20:14:37.653850  496811 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 20:14:37.656865  496811 out.go:203] 
** /stderr **
addons_test.go:1057: failed to disable metrics-server addon: args "out/minikube-linux-arm64 -p addons-052340 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.39s)
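Note that metrics-server itself was healthy: the pod went Running within about 5s and `kubectl top pods` returned data, so only the disable step died on the runc paused-check described under InspektorGadget above. To confirm the addon works independently of the disable path, using only commands already shown in this log:

	kubectl --context addons-052340 -n kube-system get pods -l k8s-app=metrics-server
	kubectl --context addons-052340 top pods -n kube-system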
TestAddons/parallel/CSI (54.28s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
I1217 20:14:28.827926  488412 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1217 20:14:28.839680  488412 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1217 20:14:28.839713  488412 kapi.go:107] duration metric: took 11.802052ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 11.816748ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-052340 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-052340 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-052340 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-052340 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-052340 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-052340 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-052340 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-052340 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-052340 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-052340 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-052340 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-052340 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-052340 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-052340 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-052340 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-052340 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-052340 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-052340 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-052340 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-052340 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [2a2e862b-f6c1-41c7-8f86-baea90d8fba9] Pending
helpers_test.go:353: "task-pv-pod" [2a2e862b-f6c1-41c7-8f86-baea90d8fba9] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.004370741s
addons_test.go:574: (dbg) Run:  kubectl --context addons-052340 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-052340 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:436: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:428: (dbg) Run:  kubectl --context addons-052340 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-052340 delete pod task-pv-pod
addons_test.go:590: (dbg) Run:  kubectl --context addons-052340 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-052340 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-052340 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-052340 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-052340 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-052340 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-052340 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-052340 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-052340 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-052340 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-052340 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-052340 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-052340 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-052340 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-052340 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-052340 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-052340 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-052340 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-052340 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-052340 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-052340 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-052340 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-052340 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [385c187a-0a5b-4efc-8aac-5884ab73a78c] Pending
helpers_test.go:353: "task-pv-pod-restore" [385c187a-0a5b-4efc-8aac-5884ab73a78c] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod-restore" [385c187a-0a5b-4efc-8aac-5884ab73a78c] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004495767s
addons_test.go:616: (dbg) Run:  kubectl --context addons-052340 delete pod task-pv-pod-restore
addons_test.go:620: (dbg) Run:  kubectl --context addons-052340 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-052340 delete volumesnapshot new-snapshot-demo
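Up to this point the CSI test body passed end to end: the PVC bound, the pod ran, the snapshot became ready, and the restored pod ran. The failures below come only from the addon-disable teardown hitting the same runc check. For reference, here is the flow the test drove, condensed into equivalent kubectl calls; the manifests are the testdata paths from this log, `kubectl wait` stands in for the test's polling loop, and the restore PVC presumably names the snapshot as its dataSource:

	kubectl --context addons-052340 create -f testdata/csi-hostpath-driver/pvc.yaml        # PVC "hpvc"
	kubectl --context addons-052340 create -f testdata/csi-hostpath-driver/pv-pod.yaml     # pod "task-pv-pod" mounts it
	kubectl --context addons-052340 create -f testdata/csi-hostpath-driver/snapshot.yaml   # VolumeSnapshot "new-snapshot-demo"
	kubectl --context addons-052340 wait volumesnapshot/new-snapshot-demo --for=jsonpath='{.status.readyToUse}'=true --timeout=6m
	kubectl --context addons-052340 delete pod task-pv-pod
	kubectl --context addons-052340 delete pvc hpvc
	kubectl --context addons-052340 create -f testdata/csi-hostpath-driver/pvc-restore.yaml     # PVC "hpvc-restore", restored from the snapshot
	kubectl --context addons-052340 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml  # pod "task-pv-pod-restore"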
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-052340 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-052340 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (283.303798ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1217 20:15:22.592816  497814 out.go:360] Setting OutFile to fd 1 ...
	I1217 20:15:22.593534  497814 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:15:22.593550  497814 out.go:374] Setting ErrFile to fd 2...
	I1217 20:15:22.593556  497814 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:15:22.593825  497814 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-485134/.minikube/bin
	I1217 20:15:22.594128  497814 mustload.go:66] Loading cluster: addons-052340
	I1217 20:15:22.594509  497814 config.go:182] Loaded profile config "addons-052340": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:15:22.594529  497814 addons.go:622] checking whether the cluster is paused
	I1217 20:15:22.594639  497814 config.go:182] Loaded profile config "addons-052340": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:15:22.594654  497814 host.go:66] Checking if "addons-052340" exists ...
	I1217 20:15:22.595168  497814 cli_runner.go:164] Run: docker container inspect addons-052340 --format={{.State.Status}}
	I1217 20:15:22.613474  497814 ssh_runner.go:195] Run: systemctl --version
	I1217 20:15:22.613539  497814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-052340
	I1217 20:15:22.634560  497814 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/addons-052340/id_rsa Username:docker}
	I1217 20:15:22.738569  497814 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 20:15:22.738651  497814 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 20:15:22.784588  497814 cri.go:89] found id: "a40f8c4c9d66754b6dd5a9559839fcf1fe8c00a06c71ce2f09926993d2c3d9eb"
	I1217 20:15:22.784610  497814 cri.go:89] found id: "5b40645a2f29661a7d2787522f23071b2c894de3a6b5b7638a2111a3f9485374"
	I1217 20:15:22.784615  497814 cri.go:89] found id: "4a7c07a0b97549aa9a9ce8e8e19127c9c8cf4e24abdccd8f61dcd057b97c9d88"
	I1217 20:15:22.784620  497814 cri.go:89] found id: "d3baa47458bc085e8d55815cf0497461a119e95081ce45cce1d577901a7dbf8d"
	I1217 20:15:22.784623  497814 cri.go:89] found id: "fcc9a7828f6b875df447059eabaf032b1c2d17e3c003bd1948f528d321bd3c85"
	I1217 20:15:22.784626  497814 cri.go:89] found id: "765f61bbb3ba86834abad3ede0a99451b487e6dc73b004c63a20a8f02e1d84a5"
	I1217 20:15:22.784629  497814 cri.go:89] found id: "d33875f7a8b74239db487ffa04bcf41c083f1ad65f29c43e2ae8b0dd783b521c"
	I1217 20:15:22.784632  497814 cri.go:89] found id: "79a9e5f9433489a03f4b2296098c887736f30cde921105d6ffa972cf7406abb4"
	I1217 20:15:22.784635  497814 cri.go:89] found id: "8d7f5c62629eb9e2812369690fb943f4e57ef24985d574b704ffc9b79a56d549"
	I1217 20:15:22.784642  497814 cri.go:89] found id: "af528db69b5d34f2ebcd038272fbfe9866e82f22a612276fb737683d83c256b9"
	I1217 20:15:22.784645  497814 cri.go:89] found id: "028c23d163f91b2b1e5d8071a70c7fb133ac203b414302242719cd40dba8d733"
	I1217 20:15:22.784648  497814 cri.go:89] found id: "85713e1610062bb4691b694312bf4faa71e96232dd94bc9fc564053646cde3a0"
	I1217 20:15:22.784651  497814 cri.go:89] found id: "cb82734cbf7f969e92c5a0b06400eb1d90d484748c850c4888051348e720776e"
	I1217 20:15:22.784654  497814 cri.go:89] found id: "f60e3e143ad43d5768067a6b27a4ece464edd1136c6931405c68bea040dd1097"
	I1217 20:15:22.784658  497814 cri.go:89] found id: "18d9f4acabfa7df7f42816d2545211aeda7f7362f2af1f80e188ea780addc6f8"
	I1217 20:15:22.784663  497814 cri.go:89] found id: "fead2bfafa7367c08236e1dd9040e848a879a067c3b70a78578d71429854ebf6"
	I1217 20:15:22.784666  497814 cri.go:89] found id: "2b6e269a93d8bccdf1b28bd4369628c766dbf092b4be199c7da9d8d756358c68"
	I1217 20:15:22.784681  497814 cri.go:89] found id: "7fd15b6b594715c56d64ee54d7a6852f04c749cfdf1c08a4bcf60d7a2e38d3b6"
	I1217 20:15:22.784684  497814 cri.go:89] found id: "6311d7d7f6f04d57961651f5acfdebe6792efa3db55df12e18440d726be5abcf"
	I1217 20:15:22.784687  497814 cri.go:89] found id: "6481ede2a21b9816559407d8564d57427a11b523832053f995eb07f4d1ce83bc"
	I1217 20:15:22.784692  497814 cri.go:89] found id: "873499eab93d6e655f10d1bf2c19abe29efc74e958b7418723b5c7c02699e059"
	I1217 20:15:22.784695  497814 cri.go:89] found id: "45a4f23a594c0e4311b2eac64abad5f26d74229e5aa0a19e11b7b7c18f0e227d"
	I1217 20:15:22.784698  497814 cri.go:89] found id: "dd0351330b604610224568f93b842fec24e6e4e932b309a6741ffab6f0d17b9f"
	I1217 20:15:22.784706  497814 cri.go:89] found id: ""
	I1217 20:15:22.784756  497814 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 20:15:22.801409  497814 out.go:203] 
	W1217 20:15:22.804394  497814 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T20:15:22Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 20:15:22.804418  497814 out.go:285] * 
	W1217 20:15:22.810129  497814 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 20:15:22.812989  497814 out.go:203] 
** /stderr **
addons_test.go:1057: failed to disable volumesnapshots addon: args "out/minikube-linux-arm64 -p addons-052340 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-052340 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-052340 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (281.161111ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1217 20:15:22.883069  497867 out.go:360] Setting OutFile to fd 1 ...
	I1217 20:15:22.883789  497867 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:15:22.883804  497867 out.go:374] Setting ErrFile to fd 2...
	I1217 20:15:22.883810  497867 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:15:22.884073  497867 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-485134/.minikube/bin
	I1217 20:15:22.884446  497867 mustload.go:66] Loading cluster: addons-052340
	I1217 20:15:22.884968  497867 config.go:182] Loaded profile config "addons-052340": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:15:22.884988  497867 addons.go:622] checking whether the cluster is paused
	I1217 20:15:22.885136  497867 config.go:182] Loaded profile config "addons-052340": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:15:22.885149  497867 host.go:66] Checking if "addons-052340" exists ...
	I1217 20:15:22.885718  497867 cli_runner.go:164] Run: docker container inspect addons-052340 --format={{.State.Status}}
	I1217 20:15:22.906848  497867 ssh_runner.go:195] Run: systemctl --version
	I1217 20:15:22.906913  497867 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-052340
	I1217 20:15:22.943014  497867 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/addons-052340/id_rsa Username:docker}
	I1217 20:15:23.038777  497867 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 20:15:23.038868  497867 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 20:15:23.068731  497867 cri.go:89] found id: "a40f8c4c9d66754b6dd5a9559839fcf1fe8c00a06c71ce2f09926993d2c3d9eb"
	I1217 20:15:23.068804  497867 cri.go:89] found id: "5b40645a2f29661a7d2787522f23071b2c894de3a6b5b7638a2111a3f9485374"
	I1217 20:15:23.068816  497867 cri.go:89] found id: "4a7c07a0b97549aa9a9ce8e8e19127c9c8cf4e24abdccd8f61dcd057b97c9d88"
	I1217 20:15:23.068821  497867 cri.go:89] found id: "d3baa47458bc085e8d55815cf0497461a119e95081ce45cce1d577901a7dbf8d"
	I1217 20:15:23.068824  497867 cri.go:89] found id: "fcc9a7828f6b875df447059eabaf032b1c2d17e3c003bd1948f528d321bd3c85"
	I1217 20:15:23.068828  497867 cri.go:89] found id: "765f61bbb3ba86834abad3ede0a99451b487e6dc73b004c63a20a8f02e1d84a5"
	I1217 20:15:23.068831  497867 cri.go:89] found id: "d33875f7a8b74239db487ffa04bcf41c083f1ad65f29c43e2ae8b0dd783b521c"
	I1217 20:15:23.068834  497867 cri.go:89] found id: "79a9e5f9433489a03f4b2296098c887736f30cde921105d6ffa972cf7406abb4"
	I1217 20:15:23.068838  497867 cri.go:89] found id: "8d7f5c62629eb9e2812369690fb943f4e57ef24985d574b704ffc9b79a56d549"
	I1217 20:15:23.068852  497867 cri.go:89] found id: "af528db69b5d34f2ebcd038272fbfe9866e82f22a612276fb737683d83c256b9"
	I1217 20:15:23.068859  497867 cri.go:89] found id: "028c23d163f91b2b1e5d8071a70c7fb133ac203b414302242719cd40dba8d733"
	I1217 20:15:23.068862  497867 cri.go:89] found id: "85713e1610062bb4691b694312bf4faa71e96232dd94bc9fc564053646cde3a0"
	I1217 20:15:23.068865  497867 cri.go:89] found id: "cb82734cbf7f969e92c5a0b06400eb1d90d484748c850c4888051348e720776e"
	I1217 20:15:23.068868  497867 cri.go:89] found id: "f60e3e143ad43d5768067a6b27a4ece464edd1136c6931405c68bea040dd1097"
	I1217 20:15:23.068871  497867 cri.go:89] found id: "18d9f4acabfa7df7f42816d2545211aeda7f7362f2af1f80e188ea780addc6f8"
	I1217 20:15:23.068880  497867 cri.go:89] found id: "fead2bfafa7367c08236e1dd9040e848a879a067c3b70a78578d71429854ebf6"
	I1217 20:15:23.068887  497867 cri.go:89] found id: "2b6e269a93d8bccdf1b28bd4369628c766dbf092b4be199c7da9d8d756358c68"
	I1217 20:15:23.068893  497867 cri.go:89] found id: "7fd15b6b594715c56d64ee54d7a6852f04c749cfdf1c08a4bcf60d7a2e38d3b6"
	I1217 20:15:23.068896  497867 cri.go:89] found id: "6311d7d7f6f04d57961651f5acfdebe6792efa3db55df12e18440d726be5abcf"
	I1217 20:15:23.068899  497867 cri.go:89] found id: "6481ede2a21b9816559407d8564d57427a11b523832053f995eb07f4d1ce83bc"
	I1217 20:15:23.068904  497867 cri.go:89] found id: "873499eab93d6e655f10d1bf2c19abe29efc74e958b7418723b5c7c02699e059"
	I1217 20:15:23.068917  497867 cri.go:89] found id: "45a4f23a594c0e4311b2eac64abad5f26d74229e5aa0a19e11b7b7c18f0e227d"
	I1217 20:15:23.068920  497867 cri.go:89] found id: "dd0351330b604610224568f93b842fec24e6e4e932b309a6741ffab6f0d17b9f"
	I1217 20:15:23.068923  497867 cri.go:89] found id: ""
	I1217 20:15:23.068976  497867 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 20:15:23.085330  497867 out.go:203] 
	W1217 20:15:23.088319  497867 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T20:15:23Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 20:15:23.088352  497867 out.go:285] * 
	W1217 20:15:23.094154  497867 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 20:15:23.097288  497867 out.go:203] 
** /stderr **
addons_test.go:1057: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-arm64 -p addons-052340 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (54.28s)
TestAddons/parallel/Headlamp (3.61s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-052340 --alsologtostderr -v=1
addons_test.go:810: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable headlamp -p addons-052340 --alsologtostderr -v=1: exit status 11 (377.22832ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1217 20:14:28.742846  496193 out.go:360] Setting OutFile to fd 1 ...
	I1217 20:14:28.743674  496193 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:14:28.743721  496193 out.go:374] Setting ErrFile to fd 2...
	I1217 20:14:28.743754  496193 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:14:28.744060  496193 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-485134/.minikube/bin
	I1217 20:14:28.744418  496193 mustload.go:66] Loading cluster: addons-052340
	I1217 20:14:28.744900  496193 config.go:182] Loaded profile config "addons-052340": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:14:28.744951  496193 addons.go:622] checking whether the cluster is paused
	I1217 20:14:28.745089  496193 config.go:182] Loaded profile config "addons-052340": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:14:28.745126  496193 host.go:66] Checking if "addons-052340" exists ...
	I1217 20:14:28.745683  496193 cli_runner.go:164] Run: docker container inspect addons-052340 --format={{.State.Status}}
	I1217 20:14:28.777274  496193 ssh_runner.go:195] Run: systemctl --version
	I1217 20:14:28.777346  496193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-052340
	I1217 20:14:28.814472  496193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/addons-052340/id_rsa Username:docker}
	I1217 20:14:28.932822  496193 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 20:14:28.932919  496193 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 20:14:28.995121  496193 cri.go:89] found id: "a40f8c4c9d66754b6dd5a9559839fcf1fe8c00a06c71ce2f09926993d2c3d9eb"
	I1217 20:14:28.995141  496193 cri.go:89] found id: "5b40645a2f29661a7d2787522f23071b2c894de3a6b5b7638a2111a3f9485374"
	I1217 20:14:28.995146  496193 cri.go:89] found id: "4a7c07a0b97549aa9a9ce8e8e19127c9c8cf4e24abdccd8f61dcd057b97c9d88"
	I1217 20:14:28.995150  496193 cri.go:89] found id: "d3baa47458bc085e8d55815cf0497461a119e95081ce45cce1d577901a7dbf8d"
	I1217 20:14:28.995153  496193 cri.go:89] found id: "fcc9a7828f6b875df447059eabaf032b1c2d17e3c003bd1948f528d321bd3c85"
	I1217 20:14:28.995157  496193 cri.go:89] found id: "765f61bbb3ba86834abad3ede0a99451b487e6dc73b004c63a20a8f02e1d84a5"
	I1217 20:14:28.995160  496193 cri.go:89] found id: "d33875f7a8b74239db487ffa04bcf41c083f1ad65f29c43e2ae8b0dd783b521c"
	I1217 20:14:28.995163  496193 cri.go:89] found id: "79a9e5f9433489a03f4b2296098c887736f30cde921105d6ffa972cf7406abb4"
	I1217 20:14:28.995166  496193 cri.go:89] found id: "8d7f5c62629eb9e2812369690fb943f4e57ef24985d574b704ffc9b79a56d549"
	I1217 20:14:28.995174  496193 cri.go:89] found id: "af528db69b5d34f2ebcd038272fbfe9866e82f22a612276fb737683d83c256b9"
	I1217 20:14:28.995177  496193 cri.go:89] found id: "028c23d163f91b2b1e5d8071a70c7fb133ac203b414302242719cd40dba8d733"
	I1217 20:14:28.995180  496193 cri.go:89] found id: "85713e1610062bb4691b694312bf4faa71e96232dd94bc9fc564053646cde3a0"
	I1217 20:14:28.995183  496193 cri.go:89] found id: "cb82734cbf7f969e92c5a0b06400eb1d90d484748c850c4888051348e720776e"
	I1217 20:14:28.995186  496193 cri.go:89] found id: "f60e3e143ad43d5768067a6b27a4ece464edd1136c6931405c68bea040dd1097"
	I1217 20:14:28.995189  496193 cri.go:89] found id: "18d9f4acabfa7df7f42816d2545211aeda7f7362f2af1f80e188ea780addc6f8"
	I1217 20:14:28.995196  496193 cri.go:89] found id: "fead2bfafa7367c08236e1dd9040e848a879a067c3b70a78578d71429854ebf6"
	I1217 20:14:28.995200  496193 cri.go:89] found id: "2b6e269a93d8bccdf1b28bd4369628c766dbf092b4be199c7da9d8d756358c68"
	I1217 20:14:28.995205  496193 cri.go:89] found id: "7fd15b6b594715c56d64ee54d7a6852f04c749cfdf1c08a4bcf60d7a2e38d3b6"
	I1217 20:14:28.995209  496193 cri.go:89] found id: "6311d7d7f6f04d57961651f5acfdebe6792efa3db55df12e18440d726be5abcf"
	I1217 20:14:28.995212  496193 cri.go:89] found id: "6481ede2a21b9816559407d8564d57427a11b523832053f995eb07f4d1ce83bc"
	I1217 20:14:28.995217  496193 cri.go:89] found id: "873499eab93d6e655f10d1bf2c19abe29efc74e958b7418723b5c7c02699e059"
	I1217 20:14:28.995220  496193 cri.go:89] found id: "45a4f23a594c0e4311b2eac64abad5f26d74229e5aa0a19e11b7b7c18f0e227d"
	I1217 20:14:28.995223  496193 cri.go:89] found id: "dd0351330b604610224568f93b842fec24e6e4e932b309a6741ffab6f0d17b9f"
	I1217 20:14:28.995227  496193 cri.go:89] found id: ""
	I1217 20:14:28.995281  496193 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 20:14:29.019509  496193 out.go:203] 
	W1217 20:14:29.022562  496193 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T20:14:29Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 20:14:29.022587  496193 out.go:285] * 
	W1217 20:14:29.028455  496193 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 20:14:29.031243  496193 out.go:203] 
** /stderr **
addons_test.go:812: failed to enable headlamp addon: args: "out/minikube-linux-arm64 addons enable headlamp -p addons-052340 --alsologtostderr -v=1": exit status 11
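Unlike the disable failures above, this one aborts on enable (MK_ADDON_ENABLE_PAUSED), before any Headlamp resources are created, so there are no addon pods for the post-mortem below to inspect. A quick way to verify nothing was deployed; the `headlamp` namespace name here is an assumption based on the addon's manifests:

	kubectl --context addons-052340 get ns headlamp                    # expect NotFound
	kubectl --context addons-052340 get deploy -A | grep -i headlamp   # expect no output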
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect addons-052340
helpers_test.go:244: (dbg) docker inspect addons-052340:
-- stdout --
	[
	    {
	        "Id": "b27951342508b91edb73f9914fc67563412a9b1c899ae0a2e16b6aa92b97dafa",
	        "Created": "2025-12-17T20:11:52.64290744Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 489812,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T20:11:52.704179967Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/b27951342508b91edb73f9914fc67563412a9b1c899ae0a2e16b6aa92b97dafa/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b27951342508b91edb73f9914fc67563412a9b1c899ae0a2e16b6aa92b97dafa/hostname",
	        "HostsPath": "/var/lib/docker/containers/b27951342508b91edb73f9914fc67563412a9b1c899ae0a2e16b6aa92b97dafa/hosts",
	        "LogPath": "/var/lib/docker/containers/b27951342508b91edb73f9914fc67563412a9b1c899ae0a2e16b6aa92b97dafa/b27951342508b91edb73f9914fc67563412a9b1c899ae0a2e16b6aa92b97dafa-json.log",
	        "Name": "/addons-052340",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-052340:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-052340",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b27951342508b91edb73f9914fc67563412a9b1c899ae0a2e16b6aa92b97dafa",
	                "LowerDir": "/var/lib/docker/overlay2/41d14526c9f9e2c943fa6c174fd3eacaacc70a9877d3ba01090549f4673e6a14-init/diff:/var/lib/docker/overlay2/c4759b344ecb109c83d66e4ef56c76903f1d1e597efb86ab2d2c7911b1130a8f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/41d14526c9f9e2c943fa6c174fd3eacaacc70a9877d3ba01090549f4673e6a14/merged",
	                "UpperDir": "/var/lib/docker/overlay2/41d14526c9f9e2c943fa6c174fd3eacaacc70a9877d3ba01090549f4673e6a14/diff",
	                "WorkDir": "/var/lib/docker/overlay2/41d14526c9f9e2c943fa6c174fd3eacaacc70a9877d3ba01090549f4673e6a14/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-052340",
	                "Source": "/var/lib/docker/volumes/addons-052340/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-052340",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-052340",
	                "name.minikube.sigs.k8s.io": "addons-052340",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "bf40f6023f9adc85017d09c172eba670e4306c6dafedce644fc3f08c08da1e32",
	            "SandboxKey": "/var/run/docker/netns/bf40f6023f9a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33163"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33164"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33167"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33165"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33166"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-052340": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e2:14:c3:fc:f0:b4",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d76a8eefec5c23c1ba4193d7d9ab608b42400bf214e60dbe902877081ec089a0",
	                    "EndpointID": "0b85bcda4414468275eb68480bcf440a128467b53a640057593867b94979d6f7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-052340",
	                        "b27951342508"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
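The Go templates in the log lines above are reusable as-is for debugging the node container; both resolve against the inspect output just shown (profile name from this report):

	docker container inspect addons-052340 --format='{{.State.Status}}'                                                # "running"
	docker container inspect addons-052340 --format='{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'   # "33163", the node's sshd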
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-052340 -n addons-052340
helpers_test.go:253: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p addons-052340 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p addons-052340 logs -n 25: (1.612414096s)
helpers_test.go:261: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	COMMAND | ARGS                                                                                                                                                                            | PROFILE                | USER    | VERSION | START TIME          | END TIME
	--------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------+---------+---------+---------------------+--------------------
	delete  | --all                                                                                                                                                                           | minikube               | jenkins | v1.37.0 | 17 Dec 25 20:11 UTC | 17 Dec 25 20:11 UTC
	delete  | -p download-only-705173                                                                                                                                                         | download-only-705173   | jenkins | v1.37.0 | 17 Dec 25 20:11 UTC | 17 Dec 25 20:11 UTC
	start   | -o=json --download-only -p download-only-496345 --force --alsologtostderr --kubernetes-version=v1.35.0-rc.1 --container-runtime=crio --driver=docker  --container-runtime=crio | download-only-496345   | jenkins | v1.37.0 | 17 Dec 25 20:11 UTC |
	delete  | --all                                                                                                                                                                           | minikube               | jenkins | v1.37.0 | 17 Dec 25 20:11 UTC | 17 Dec 25 20:11 UTC
	delete  | -p download-only-496345                                                                                                                                                         | download-only-496345   | jenkins | v1.37.0 | 17 Dec 25 20:11 UTC | 17 Dec 25 20:11 UTC
	delete  | -p download-only-306451                                                                                                                                                         | download-only-306451   | jenkins | v1.37.0 | 17 Dec 25 20:11 UTC | 17 Dec 25 20:11 UTC
	delete  | -p download-only-705173                                                                                                                                                         | download-only-705173   | jenkins | v1.37.0 | 17 Dec 25 20:11 UTC | 17 Dec 25 20:11 UTC
	delete  | -p download-only-496345                                                                                                                                                         | download-only-496345   | jenkins | v1.37.0 | 17 Dec 25 20:11 UTC | 17 Dec 25 20:11 UTC
	start   | --download-only -p download-docker-133846 --alsologtostderr --driver=docker  --container-runtime=crio                                                                           | download-docker-133846 | jenkins | v1.37.0 | 17 Dec 25 20:11 UTC |
	delete  | -p download-docker-133846                                                                                                                                                       | download-docker-133846 | jenkins | v1.37.0 | 17 Dec 25 20:11 UTC | 17 Dec 25 20:11 UTC
	start   | --download-only -p binary-mirror-177191 --alsologtostderr --binary-mirror http://127.0.0.1:41401 --driver=docker  --container-runtime=crio                                      | binary-mirror-177191   | jenkins | v1.37.0 | 17 Dec 25 20:11 UTC |
	│ delete  │ -p binary-mirror-177191                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-177191   │ jenkins │ v1.37.0 │ 17 Dec 25 20:11 UTC │ 17 Dec 25 20:11 UTC │
	│ addons  │ enable dashboard -p addons-052340                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-052340          │ jenkins │ v1.37.0 │ 17 Dec 25 20:11 UTC │                     │
	│ addons  │ disable dashboard -p addons-052340                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-052340          │ jenkins │ v1.37.0 │ 17 Dec 25 20:11 UTC │                     │
	│ start   │ -p addons-052340 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-052340          │ jenkins │ v1.37.0 │ 17 Dec 25 20:11 UTC │ 17 Dec 25 20:13 UTC │
	│ addons  │ addons-052340 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-052340          │ jenkins │ v1.37.0 │ 17 Dec 25 20:13 UTC │                     │
	│ addons  │ addons-052340 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-052340          │ jenkins │ v1.37.0 │ 17 Dec 25 20:14 UTC │                     │
	│ addons  │ addons-052340 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-052340          │ jenkins │ v1.37.0 │ 17 Dec 25 20:14 UTC │                     │
	│ addons  │ addons-052340 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-052340          │ jenkins │ v1.37.0 │ 17 Dec 25 20:14 UTC │                     │
	│ ip      │ addons-052340 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-052340          │ jenkins │ v1.37.0 │ 17 Dec 25 20:14 UTC │ 17 Dec 25 20:14 UTC │
	│ addons  │ addons-052340 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-052340          │ jenkins │ v1.37.0 │ 17 Dec 25 20:14 UTC │                     │
	│ ssh     │ addons-052340 ssh cat /opt/local-path-provisioner/pvc-daf0f2d6-ee80-4a73-a7d9-dc14f656f00d_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-052340          │ jenkins │ v1.37.0 │ 17 Dec 25 20:14 UTC │ 17 Dec 25 20:14 UTC │
	│ addons  │ addons-052340 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-052340          │ jenkins │ v1.37.0 │ 17 Dec 25 20:14 UTC │                     │
	│ addons  │ addons-052340 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-052340          │ jenkins │ v1.37.0 │ 17 Dec 25 20:14 UTC │                     │
	│ addons  │ enable headlamp -p addons-052340 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-052340          │ jenkins │ v1.37.0 │ 17 Dec 25 20:14 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 20:11:46
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 20:11:46.376525  489418 out.go:360] Setting OutFile to fd 1 ...
	I1217 20:11:46.376697  489418 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:11:46.376729  489418 out.go:374] Setting ErrFile to fd 2...
	I1217 20:11:46.376751  489418 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:11:46.377025  489418 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-485134/.minikube/bin
	I1217 20:11:46.377550  489418 out.go:368] Setting JSON to false
	I1217 20:11:46.378393  489418 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":10456,"bootTime":1765991851,"procs":145,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1217 20:11:46.378496  489418 start.go:143] virtualization:  
	I1217 20:11:46.380149  489418 out.go:179] * [addons-052340] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 20:11:46.381510  489418 notify.go:221] Checking for updates...
	I1217 20:11:46.383938  489418 out.go:179]   - MINIKUBE_LOCATION=21808
	I1217 20:11:46.385162  489418 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 20:11:46.386267  489418 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-485134/kubeconfig
	I1217 20:11:46.387359  489418 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-485134/.minikube
	I1217 20:11:46.388470  489418 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 20:11:46.389747  489418 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 20:11:46.391252  489418 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 20:11:46.413237  489418 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 20:11:46.413366  489418 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 20:11:46.478087  489418 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-17 20:11:46.468719631 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 20:11:46.478196  489418 docker.go:319] overlay module found
	I1217 20:11:46.479929  489418 out.go:179] * Using the docker driver based on user configuration
	I1217 20:11:46.481354  489418 start.go:309] selected driver: docker
	I1217 20:11:46.481371  489418 start.go:927] validating driver "docker" against <nil>
	I1217 20:11:46.481393  489418 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 20:11:46.482161  489418 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 20:11:46.536044  489418 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-17 20:11:46.5267006 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 20:11:46.536205  489418 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 20:11:46.536438  489418 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 20:11:46.537858  489418 out.go:179] * Using Docker driver with root privileges
	I1217 20:11:46.539201  489418 cni.go:84] Creating CNI manager for ""
	I1217 20:11:46.539265  489418 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 20:11:46.539279  489418 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1217 20:11:46.539358  489418 start.go:353] cluster config:
	{Name:addons-052340 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-052340 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:11:46.540768  489418 out.go:179] * Starting "addons-052340" primary control-plane node in "addons-052340" cluster
	I1217 20:11:46.542021  489418 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 20:11:46.543869  489418 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 20:11:46.545388  489418 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 20:11:46.545398  489418 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 20:11:46.545433  489418 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21808-485134/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4
	I1217 20:11:46.545449  489418 cache.go:65] Caching tarball of preloaded images
	I1217 20:11:46.545533  489418 preload.go:238] Found /home/jenkins/minikube-integration/21808-485134/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1217 20:11:46.545543  489418 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1217 20:11:46.545893  489418 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/config.json ...
	I1217 20:11:46.545914  489418 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/config.json: {Name:mk1f94198e9fff9e1603e7d6d656a228af0111a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:11:46.564988  489418 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 20:11:46.565012  489418 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 20:11:46.565028  489418 cache.go:243] Successfully downloaded all kic artifacts
	I1217 20:11:46.565059  489418 start.go:360] acquireMachinesLock for addons-052340: {Name:mk6a23b5fdd10e06656251611d99d4457cfa70cf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 20:11:46.565166  489418 start.go:364] duration metric: took 88.181µs to acquireMachinesLock for "addons-052340"
	I1217 20:11:46.565197  489418 start.go:93] Provisioning new machine with config: &{Name:addons-052340 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-052340 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 20:11:46.565277  489418 start.go:125] createHost starting for "" (driver="docker")
	I1217 20:11:46.566877  489418 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1217 20:11:46.567107  489418 start.go:159] libmachine.API.Create for "addons-052340" (driver="docker")
	I1217 20:11:46.567141  489418 client.go:173] LocalClient.Create starting
	I1217 20:11:46.567257  489418 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem
	I1217 20:11:46.708126  489418 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem
	I1217 20:11:47.009553  489418 cli_runner.go:164] Run: docker network inspect addons-052340 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1217 20:11:47.026907  489418 cli_runner.go:211] docker network inspect addons-052340 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1217 20:11:47.027005  489418 network_create.go:284] running [docker network inspect addons-052340] to gather additional debugging logs...
	I1217 20:11:47.027028  489418 cli_runner.go:164] Run: docker network inspect addons-052340
	W1217 20:11:47.044833  489418 cli_runner.go:211] docker network inspect addons-052340 returned with exit code 1
	I1217 20:11:47.044863  489418 network_create.go:287] error running [docker network inspect addons-052340]: docker network inspect addons-052340: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-052340 not found
	I1217 20:11:47.044891  489418 network_create.go:289] output of [docker network inspect addons-052340]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-052340 not found
	
	** /stderr **
	I1217 20:11:47.045010  489418 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 20:11:47.062646  489418 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a44f20}
	I1217 20:11:47.062708  489418 network_create.go:124] attempt to create docker network addons-052340 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1217 20:11:47.062766  489418 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-052340 addons-052340
	I1217 20:11:47.125107  489418 network_create.go:108] docker network addons-052340 192.168.49.0/24 created
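
The network bootstrap logged above is plain Docker. A minimal sketch that reproduces it by hand, with the subnet, gateway, and profile name taken from this run:

	# create the per-profile bridge network with the same flags minikube logged
	docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 \
	  -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
	  --label=created_by.minikube.sigs.k8s.io=true \
	  --label=name.minikube.sigs.k8s.io=addons-052340 addons-052340
	# confirm the subnet/gateway the static IP 192.168.49.2 is derived from
	docker network inspect addons-052340 --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
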
	I1217 20:11:47.125144  489418 kic.go:121] calculated static IP "192.168.49.2" for the "addons-052340" container
	I1217 20:11:47.125242  489418 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1217 20:11:47.142547  489418 cli_runner.go:164] Run: docker volume create addons-052340 --label name.minikube.sigs.k8s.io=addons-052340 --label created_by.minikube.sigs.k8s.io=true
	I1217 20:11:47.160469  489418 oci.go:103] Successfully created a docker volume addons-052340
	I1217 20:11:47.160556  489418 cli_runner.go:164] Run: docker run --rm --name addons-052340-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-052340 --entrypoint /usr/bin/test -v addons-052340:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1217 20:11:48.607937  489418 cli_runner.go:217] Completed: docker run --rm --name addons-052340-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-052340 --entrypoint /usr/bin/test -v addons-052340:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib: (1.447330667s)
	I1217 20:11:48.607969  489418 oci.go:107] Successfully prepared a docker volume addons-052340
	I1217 20:11:48.608014  489418 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 20:11:48.608023  489418 kic.go:194] Starting extracting preloaded images to volume ...
	I1217 20:11:48.608095  489418 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21808-485134/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-052340:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
	I1217 20:11:52.573544  489418 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21808-485134/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-052340:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (3.965397174s)
	I1217 20:11:52.573576  489418 kic.go:203] duration metric: took 3.96554842s to extract preloaded images to volume ...
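
The preload step above is a lz4 tarball streamed into the profile's Docker volume that later backs /var in the node container. The equivalent standalone command (paths and image reference copied from the log lines above, reformatted for readability):

	# mount the preload tarball read-only and untar it into the profile volume
	docker run --rm --entrypoint /usr/bin/tar \
	  -v /home/jenkins/minikube-integration/21808-485134/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro \
	  -v addons-052340:/extractDir \
	  gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 \
	  -I lz4 -xf /preloaded.tar -C /extractDir
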
	W1217 20:11:52.573712  489418 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1217 20:11:52.573824  489418 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1217 20:11:52.627936  489418 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-052340 --name addons-052340 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-052340 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-052340 --network addons-052340 --ip 192.168.49.2 --volume addons-052340:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
	I1217 20:11:52.929576  489418 cli_runner.go:164] Run: docker container inspect addons-052340 --format={{.State.Running}}
	I1217 20:11:52.950604  489418 cli_runner.go:164] Run: docker container inspect addons-052340 --format={{.State.Status}}
	I1217 20:11:52.975745  489418 cli_runner.go:164] Run: docker exec addons-052340 stat /var/lib/dpkg/alternatives/iptables
	I1217 20:11:53.039386  489418 oci.go:144] the created container "addons-052340" has a running status.
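
To double-check the container state asserted here, the same inspect calls can be run by hand (a sketch; the commands mirror the cli_runner lines above):

	docker container inspect addons-052340 --format '{{.State.Status}}'   # expect: running
	docker port addons-052340   # lists the host ports published for 22, 2376, 5000, 8443, 32443
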
	I1217 20:11:53.039414  489418 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21808-485134/.minikube/machines/addons-052340/id_rsa...
	I1217 20:11:53.405617  489418 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21808-485134/.minikube/machines/addons-052340/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1217 20:11:53.427967  489418 cli_runner.go:164] Run: docker container inspect addons-052340 --format={{.State.Status}}
	I1217 20:11:53.454843  489418 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1217 20:11:53.454869  489418 kic_runner.go:114] Args: [docker exec --privileged addons-052340 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1217 20:11:53.524582  489418 cli_runner.go:164] Run: docker container inspect addons-052340 --format={{.State.Status}}
	I1217 20:11:53.552457  489418 machine.go:94] provisionDockerMachine start ...
	I1217 20:11:53.552543  489418 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-052340
	I1217 20:11:53.580175  489418 main.go:143] libmachine: Using SSH client type: native
	I1217 20:11:53.580495  489418 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33163 <nil> <nil>}
	I1217 20:11:53.580511  489418 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 20:11:53.581114  489418 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59762->127.0.0.1:33163: read: connection reset by peer
	I1217 20:11:56.714967  489418 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-052340
	
	I1217 20:11:56.714995  489418 ubuntu.go:182] provisioning hostname "addons-052340"
	I1217 20:11:56.715056  489418 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-052340
	I1217 20:11:56.733198  489418 main.go:143] libmachine: Using SSH client type: native
	I1217 20:11:56.733576  489418 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33163 <nil> <nil>}
	I1217 20:11:56.733600  489418 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-052340 && echo "addons-052340" | sudo tee /etc/hostname
	I1217 20:11:56.876817  489418 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-052340
	
	I1217 20:11:56.876899  489418 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-052340
	I1217 20:11:56.894344  489418 main.go:143] libmachine: Using SSH client type: native
	I1217 20:11:56.894644  489418 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33163 <nil> <nil>}
	I1217 20:11:56.894677  489418 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-052340' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-052340/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-052340' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 20:11:57.031890  489418 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 20:11:57.031918  489418 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21808-485134/.minikube CaCertPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21808-485134/.minikube}
	I1217 20:11:57.031949  489418 ubuntu.go:190] setting up certificates
	I1217 20:11:57.031959  489418 provision.go:84] configureAuth start
	I1217 20:11:57.032020  489418 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-052340
	I1217 20:11:57.049428  489418 provision.go:143] copyHostCerts
	I1217 20:11:57.049517  489418 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem (1082 bytes)
	I1217 20:11:57.049649  489418 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem (1123 bytes)
	I1217 20:11:57.049722  489418 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem (1675 bytes)
	I1217 20:11:57.049784  489418 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem org=jenkins.addons-052340 san=[127.0.0.1 192.168.49.2 addons-052340 localhost minikube]
	I1217 20:11:57.275424  489418 provision.go:177] copyRemoteCerts
	I1217 20:11:57.275505  489418 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 20:11:57.275545  489418 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-052340
	I1217 20:11:57.293416  489418 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/addons-052340/id_rsa Username:docker}
	I1217 20:11:57.391686  489418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 20:11:57.409470  489418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1217 20:11:57.426555  489418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 20:11:57.444377  489418 provision.go:87] duration metric: took 412.405297ms to configureAuth
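
If configureAuth needs verifying after the fact, the remote cert paths are fixed; a sketch, going through minikube's ssh wrapper rather than the raw published port:

	# the CA, server cert, and server key copied by copyRemoteCerts above
	minikube -p addons-052340 ssh -- ls -l /etc/docker/ca.pem /etc/docker/server.pem /etc/docker/server-key.pem
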
	I1217 20:11:57.444404  489418 ubuntu.go:206] setting minikube options for container-runtime
	I1217 20:11:57.444597  489418 config.go:182] Loaded profile config "addons-052340": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:11:57.444707  489418 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-052340
	I1217 20:11:57.461730  489418 main.go:143] libmachine: Using SSH client type: native
	I1217 20:11:57.462058  489418 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33163 <nil> <nil>}
	I1217 20:11:57.462072  489418 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 20:11:57.930572  489418 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
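
The insecure-registry option lands in a sysconfig drop-in on the node; a quick check (file path taken from the command above):

	minikube -p addons-052340 ssh -- cat /etc/sysconfig/crio.minikube
	# expected content:
	# CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
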
	I1217 20:11:57.930656  489418 machine.go:97] duration metric: took 4.378178292s to provisionDockerMachine
	I1217 20:11:57.930682  489418 client.go:176] duration metric: took 11.363528665s to LocalClient.Create
	I1217 20:11:57.930728  489418 start.go:167] duration metric: took 11.363621482s to libmachine.API.Create "addons-052340"
	I1217 20:11:57.930763  489418 start.go:293] postStartSetup for "addons-052340" (driver="docker")
	I1217 20:11:57.930790  489418 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 20:11:57.930896  489418 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 20:11:57.930961  489418 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-052340
	I1217 20:11:57.948396  489418 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/addons-052340/id_rsa Username:docker}
	I1217 20:11:58.048207  489418 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 20:11:58.051718  489418 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 20:11:58.051763  489418 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 20:11:58.051778  489418 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-485134/.minikube/addons for local assets ...
	I1217 20:11:58.051853  489418 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-485134/.minikube/files for local assets ...
	I1217 20:11:58.051886  489418 start.go:296] duration metric: took 121.101465ms for postStartSetup
	I1217 20:11:58.052221  489418 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-052340
	I1217 20:11:58.069490  489418 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/config.json ...
	I1217 20:11:58.069789  489418 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 20:11:58.069849  489418 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-052340
	I1217 20:11:58.088555  489418 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/addons-052340/id_rsa Username:docker}
	I1217 20:11:58.184710  489418 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 20:11:58.189364  489418 start.go:128] duration metric: took 11.624073501s to createHost
	I1217 20:11:58.189397  489418 start.go:83] releasing machines lock for "addons-052340", held for 11.624219742s
	I1217 20:11:58.189468  489418 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-052340
	I1217 20:11:58.206175  489418 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 20:11:58.206237  489418 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem (1082 bytes)
	I1217 20:11:58.206277  489418 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem (1123 bytes)
	I1217 20:11:58.206306  489418 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem (1675 bytes)
	W1217 20:11:58.206393  489418 start.go:789] pre-probe CA setup failed: create ca cert file asset for /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt: stat: stat /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt: no such file or directory
	I1217 20:11:58.206462  489418 ssh_runner.go:195] Run: cat /version.json
	I1217 20:11:58.206506  489418 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-052340
	I1217 20:11:58.206776  489418 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 20:11:58.206833  489418 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-052340
	I1217 20:11:58.227023  489418 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/addons-052340/id_rsa Username:docker}
	I1217 20:11:58.243795  489418 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/addons-052340/id_rsa Username:docker}
	I1217 20:11:58.331107  489418 ssh_runner.go:195] Run: systemctl --version
	I1217 20:11:58.426312  489418 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 20:11:58.470733  489418 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 20:11:58.475230  489418 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 20:11:58.475303  489418 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 20:11:58.504558  489418 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1217 20:11:58.504628  489418 start.go:496] detecting cgroup driver to use...
	I1217 20:11:58.504668  489418 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 20:11:58.504726  489418 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 20:11:58.523053  489418 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 20:11:58.536489  489418 docker.go:218] disabling cri-docker service (if available) ...
	I1217 20:11:58.536555  489418 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 20:11:58.554830  489418 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 20:11:58.573852  489418 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 20:11:58.696845  489418 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 20:11:58.813085  489418 docker.go:234] disabling docker service ...
	I1217 20:11:58.813201  489418 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 20:11:58.834375  489418 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 20:11:58.848224  489418 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 20:11:58.968701  489418 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 20:11:59.101274  489418 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 20:11:59.114173  489418 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 20:11:59.128115  489418 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 20:11:59.128183  489418 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:11:59.137733  489418 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1217 20:11:59.137802  489418 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:11:59.147317  489418 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:11:59.156719  489418 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:11:59.165251  489418 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 20:11:59.173231  489418 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:11:59.181879  489418 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:11:59.195349  489418 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:11:59.204330  489418 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 20:11:59.211917  489418 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 20:11:59.219160  489418 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:11:59.330531  489418 ssh_runner.go:195] Run: sudo systemctl restart crio
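
Taken together, the sed edits above set the pause image, the cgroup manager, the conmon cgroup, and an unprivileged-port sysctl before the restart. A sketch that checks the cumulative result (expected values reconstructed from the sed commands, not captured from the node):

	minikube -p addons-052340 ssh -- sudo grep -e pause_image -e cgroup_manager -e conmon_cgroup -e ip_unprivileged_port_start /etc/crio/crio.conf.d/02-crio.conf
	# expect, roughly:
	# pause_image = "registry.k8s.io/pause:3.10.1"
	# cgroup_manager = "cgroupfs"
	# conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",
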
	I1217 20:11:59.514675  489418 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 20:11:59.514771  489418 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 20:11:59.518554  489418 start.go:564] Will wait 60s for crictl version
	I1217 20:11:59.518622  489418 ssh_runner.go:195] Run: which crictl
	I1217 20:11:59.522262  489418 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 20:11:59.554889  489418 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 20:11:59.555000  489418 ssh_runner.go:195] Run: crio --version
	I1217 20:11:59.584235  489418 ssh_runner.go:195] Run: crio --version
	I1217 20:11:59.617918  489418 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	I1217 20:11:59.620791  489418 cli_runner.go:164] Run: docker network inspect addons-052340 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 20:11:59.636829  489418 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1217 20:11:59.640673  489418 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 20:11:59.650436  489418 kubeadm.go:884] updating cluster {Name:addons-052340 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-052340 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 20:11:59.650560  489418 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 20:11:59.650614  489418 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 20:11:59.699932  489418 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 20:11:59.699964  489418 crio.go:433] Images already preloaded, skipping extraction
	I1217 20:11:59.700024  489418 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 20:11:59.724844  489418 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 20:11:59.724871  489418 cache_images.go:86] Images are preloaded, skipping loading
	I1217 20:11:59.724879  489418 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.3 crio true true} ...
	I1217 20:11:59.724985  489418 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-052340 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:addons-052340 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
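Note: the [Unit]/[Service] fragment above is written as a systemd drop-in rather than a full unit file: the empty ExecStart= line clears the stock definition before the minikube-specific kubelet invocation replaces it. The drop-in is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few steps below, after which the usual reload/restart cycle applies it:

    # apply an edited kubelet drop-in (same reload/start sequence as this run)
    sudo systemctl daemon-reload
    sudo systemctl restart kubelet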
	I1217 20:11:59.725070  489418 ssh_runner.go:195] Run: crio config
	I1217 20:11:59.777469  489418 cni.go:84] Creating CNI manager for ""
	I1217 20:11:59.777492  489418 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 20:11:59.777502  489418 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 20:11:59.777535  489418 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-052340 NodeName:addons-052340 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 20:11:59.777688  489418 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-052340"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 20:11:59.777768  489418 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1217 20:11:59.785482  489418 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 20:11:59.785555  489418 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 20:11:59.793284  489418 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1217 20:11:59.805897  489418 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 20:11:59.819492  489418 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
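Note: the kubeadm config printed above is one multi-document YAML combining InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration; the 2210-byte scp on this line ships it to /var/tmp/minikube/kubeadm.yaml.new, and kubeadm init later consumes it via --config. As a sketch, a file like this can be sanity-checked offline with the matching kubeadm binary (the validate subcommand exists in kubeadm v1.26+):

    /var/lib/minikube/binaries/v1.34.3/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new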
	I1217 20:11:59.832503  489418 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1217 20:11:59.836196  489418 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 20:11:59.846264  489418 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:11:59.954341  489418 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 20:11:59.969581  489418 certs.go:69] Setting up /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340 for IP: 192.168.49.2
	I1217 20:11:59.969600  489418 certs.go:195] generating shared ca certs ...
	I1217 20:11:59.969617  489418 certs.go:227] acquiring lock for ca certs: {Name:mk2b1bd9fa0b029b02dd38a7b1b08717d4426eca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:11:59.969832  489418 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.key
	I1217 20:12:00.418712  489418 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt ...
	I1217 20:12:00.418761  489418 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt: {Name:mkc7b12a3381fbc450f246bfde676cc2781e84c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:12:00.419063  489418 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-485134/.minikube/ca.key ...
	I1217 20:12:00.419261  489418 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-485134/.minikube/ca.key: {Name:mk2b0252dea576b037b642bb6b70cd65f4ad3caa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:12:00.419688  489418 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.key
	I1217 20:12:00.774249  489418 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.crt ...
	I1217 20:12:00.774286  489418 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.crt: {Name:mk3281bafadf3317e622593d0a7b922e4a39df91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:12:00.774470  489418 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.key ...
	I1217 20:12:00.774478  489418 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.key: {Name:mkab881eb90efdc460b8def7dbaea8828c0e513d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:12:00.774577  489418 certs.go:257] generating profile certs ...
	I1217 20:12:00.774640  489418 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/client.key
	I1217 20:12:00.774653  489418 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/client.crt with IP's: []
	I1217 20:12:01.147204  489418 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/client.crt ...
	I1217 20:12:01.147238  489418 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/client.crt: {Name:mk31f366572cdb41cb330e01f195ae0036e4e610 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:12:01.147437  489418 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/client.key ...
	I1217 20:12:01.147450  489418 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/client.key: {Name:mkf1c8df39135eec2278174f3ef12fb552c66234 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:12:01.147550  489418 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/apiserver.key.00ea8426
	I1217 20:12:01.147572  489418 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/apiserver.crt.00ea8426 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1217 20:12:01.329464  489418 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/apiserver.crt.00ea8426 ...
	I1217 20:12:01.329499  489418 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/apiserver.crt.00ea8426: {Name:mk014ba9e983f4a1a64ff112d57da7d7525e6189 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:12:01.329694  489418 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/apiserver.key.00ea8426 ...
	I1217 20:12:01.329708  489418 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/apiserver.key.00ea8426: {Name:mk1ee45456823d68e7c5052c1a87a3d9c89d927f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:12:01.329791  489418 certs.go:382] copying /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/apiserver.crt.00ea8426 -> /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/apiserver.crt
	I1217 20:12:01.329871  489418 certs.go:386] copying /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/apiserver.key.00ea8426 -> /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/apiserver.key
	I1217 20:12:01.329926  489418 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/proxy-client.key
	I1217 20:12:01.329942  489418 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/proxy-client.crt with IP's: []
	I1217 20:12:01.463380  489418 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/proxy-client.crt ...
	I1217 20:12:01.463418  489418 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/proxy-client.crt: {Name:mk7e7bdac14a1ae213acf34c87bb2bbde9d67604 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:12:01.463623  489418 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/proxy-client.key ...
	I1217 20:12:01.463641  489418 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/proxy-client.key: {Name:mk52409309336359a936c0b7a282fe0bba85a85b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:12:01.463834  489418 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 20:12:01.463881  489418 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem (1082 bytes)
	I1217 20:12:01.463912  489418 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem (1123 bytes)
	I1217 20:12:01.463942  489418 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem (1675 bytes)
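Note on the certificate layout above: minikube keeps two self-signed CAs per minikube home, minikubeCA (signs the apiserver serving cert and the "minikube-user" client cert) and proxyClientCA (signs the front-proxy/aggregator client cert), plus per-profile leaf certs under .minikube/profiles/addons-052340. Issuer/subject chains can be inspected with openssl, e.g. (path from this run):

    openssl x509 -noout -subject -issuer \
      -in /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/apiserver.crt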
	I1217 20:12:01.464525  489418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 20:12:01.486389  489418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 20:12:01.505993  489418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 20:12:01.524043  489418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1217 20:12:01.542638  489418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1217 20:12:01.561179  489418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 20:12:01.584245  489418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 20:12:01.604887  489418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 20:12:01.623759  489418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 20:12:01.643808  489418 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 20:12:01.657093  489418 ssh_runner.go:195] Run: openssl version
	I1217 20:12:01.663755  489418 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:12:01.671652  489418 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 20:12:01.679525  489418 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:12:01.683454  489418 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:12:01.683531  489418 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:12:01.725140  489418 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 20:12:01.732949  489418 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
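Note: the b5213941.0 symlink follows OpenSSL's hashed-directory convention, where trust lookups in /etc/ssl/certs resolve by the certificate's subject hash, with a numeric suffix to disambiguate collisions. The hash printed by the openssl x509 -hash run above is exactly the link name created here:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # -> b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0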
	I1217 20:12:01.740674  489418 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 20:12:01.744480  489418 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 20:12:01.744532  489418 kubeadm.go:401] StartCluster: {Name:addons-052340 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-052340 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:12:01.744604  489418 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 20:12:01.744661  489418 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 20:12:01.775331  489418 cri.go:89] found id: ""
	I1217 20:12:01.775405  489418 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 20:12:01.783479  489418 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 20:12:01.791413  489418 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 20:12:01.791500  489418 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 20:12:01.799469  489418 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 20:12:01.799491  489418 kubeadm.go:158] found existing configuration files:
	
	I1217 20:12:01.799546  489418 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 20:12:01.807520  489418 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 20:12:01.807660  489418 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 20:12:01.815372  489418 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 20:12:01.823405  489418 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 20:12:01.823479  489418 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 20:12:01.831306  489418 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 20:12:01.839915  489418 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 20:12:01.840039  489418 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 20:12:01.847907  489418 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 20:12:01.856062  489418 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 20:12:01.856154  489418 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 20:12:01.863712  489418 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
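Note: most of the --ignore-preflight-errors entries above exist because the "node" is a Docker container sharing the host kernel, so checks like Swap, NumCPU, Mem, and SystemVerification routinely fail there even though the resulting cluster works. A hedged sketch of replaying this step by hand inside the node container (container name from this run; not an exact reproduction of minikube's environment):

    docker exec addons-052340 /bin/bash -c \
      'env PATH=/var/lib/minikube/binaries/v1.34.3:$PATH \
       kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
         --ignore-preflight-errors=Swap,NumCPU,Mem,SystemVerification'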
	I1217 20:12:01.907549  489418 kubeadm.go:319] [init] Using Kubernetes version: v1.34.3
	I1217 20:12:01.907824  489418 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 20:12:01.930677  489418 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 20:12:01.930802  489418 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1217 20:12:01.930875  489418 kubeadm.go:319] OS: Linux
	I1217 20:12:01.930946  489418 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 20:12:01.931014  489418 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 20:12:01.931087  489418 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 20:12:01.931159  489418 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 20:12:01.931235  489418 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 20:12:01.931306  489418 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 20:12:01.931377  489418 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 20:12:01.931451  489418 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 20:12:01.931526  489418 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 20:12:02.000828  489418 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 20:12:02.000997  489418 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 20:12:02.001099  489418 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 20:12:02.012485  489418 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 20:12:02.018302  489418 out.go:252]   - Generating certificates and keys ...
	I1217 20:12:02.018456  489418 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 20:12:02.018564  489418 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 20:12:03.039164  489418 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 20:12:03.502741  489418 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 20:12:04.229236  489418 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 20:12:05.338977  489418 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 20:12:05.744696  489418 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 20:12:05.745011  489418 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-052340 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1217 20:12:06.120704  489418 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 20:12:06.121220  489418 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-052340 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1217 20:12:06.276842  489418 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 20:12:06.727242  489418 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 20:12:07.588433  489418 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 20:12:07.588899  489418 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 20:12:09.083503  489418 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 20:12:09.687572  489418 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 20:12:09.972998  489418 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 20:12:10.209902  489418 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 20:12:10.928010  489418 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 20:12:10.929165  489418 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 20:12:10.932103  489418 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 20:12:10.935656  489418 out.go:252]   - Booting up control plane ...
	I1217 20:12:10.935765  489418 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 20:12:10.935845  489418 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 20:12:10.936742  489418 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 20:12:10.952499  489418 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 20:12:10.952793  489418 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 20:12:10.960721  489418 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 20:12:10.961052  489418 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 20:12:10.961274  489418 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 20:12:11.085169  489418 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 20:12:11.085284  489418 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 20:12:12.086449  489418 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001573378s
	I1217 20:12:12.090060  489418 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1217 20:12:12.090152  489418 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1217 20:12:12.090236  489418 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1217 20:12:12.090309  489418 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1217 20:12:15.458982  489418 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.368338217s
	I1217 20:12:17.408923  489418 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.318821203s
	I1217 20:12:18.093014  489418 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.002769609s
	I1217 20:12:18.130532  489418 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1217 20:12:18.155175  489418 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1217 20:12:18.179902  489418 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1217 20:12:18.180116  489418 kubeadm.go:319] [mark-control-plane] Marking the node addons-052340 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1217 20:12:18.201337  489418 kubeadm.go:319] [bootstrap-token] Using token: o0jkvy.oy99iv7pltt4di17
	I1217 20:12:18.206345  489418 out.go:252]   - Configuring RBAC rules ...
	I1217 20:12:18.206473  489418 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1217 20:12:18.216302  489418 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1217 20:12:18.228312  489418 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1217 20:12:18.233821  489418 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1217 20:12:18.238719  489418 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1217 20:12:18.246557  489418 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1217 20:12:18.510360  489418 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1217 20:12:18.995131  489418 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1217 20:12:19.504224  489418 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1217 20:12:19.504244  489418 kubeadm.go:319] 
	I1217 20:12:19.504305  489418 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1217 20:12:19.504326  489418 kubeadm.go:319] 
	I1217 20:12:19.504403  489418 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1217 20:12:19.504407  489418 kubeadm.go:319] 
	I1217 20:12:19.504432  489418 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1217 20:12:19.504493  489418 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1217 20:12:19.504543  489418 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1217 20:12:19.504548  489418 kubeadm.go:319] 
	I1217 20:12:19.504602  489418 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1217 20:12:19.504622  489418 kubeadm.go:319] 
	I1217 20:12:19.504669  489418 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1217 20:12:19.504673  489418 kubeadm.go:319] 
	I1217 20:12:19.504730  489418 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1217 20:12:19.504809  489418 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1217 20:12:19.504878  489418 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1217 20:12:19.504882  489418 kubeadm.go:319] 
	I1217 20:12:19.504967  489418 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1217 20:12:19.505043  489418 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1217 20:12:19.505047  489418 kubeadm.go:319] 
	I1217 20:12:19.505130  489418 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token o0jkvy.oy99iv7pltt4di17 \
	I1217 20:12:19.505233  489418 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8f40ab2bade0ae5c3450e7595a76f8b890ef62a258572dfbcace94aca819ea89 \
	I1217 20:12:19.505253  489418 kubeadm.go:319] 	--control-plane 
	I1217 20:12:19.505257  489418 kubeadm.go:319] 
	I1217 20:12:19.505351  489418 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1217 20:12:19.505356  489418 kubeadm.go:319] 
	I1217 20:12:19.505438  489418 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token o0jkvy.oy99iv7pltt4di17 \
	I1217 20:12:19.505540  489418 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8f40ab2bade0ae5c3450e7595a76f8b890ef62a258572dfbcace94aca819ea89 
	I1217 20:12:19.508615  489418 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1217 20:12:19.508834  489418 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1217 20:12:19.508938  489418 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
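Note: the --discovery-token-ca-cert-hash in the join commands above pins the cluster CA's public key, so a joining node cannot be pointed at an impostor API server. The kubeadm-documented recipe for recomputing it from the CA cert (assumes an RSA CA key, which minikube generates; CA path from this run):

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'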
	I1217 20:12:19.508957  489418 cni.go:84] Creating CNI manager for ""
	I1217 20:12:19.508964  489418 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 20:12:19.512174  489418 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1217 20:12:19.515064  489418 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1217 20:12:19.519210  489418 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.3/kubectl ...
	I1217 20:12:19.519231  489418 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1217 20:12:19.532786  489418 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
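Note on the CNI choice: the docker driver plus crio runtime combination selects kindnet (the cni.go lines above); the manifest is rendered to /var/tmp/minikube/cni.yaml and applied with the cluster's own kubectl, as shown on this line. Assuming the DaemonSet keeps kindnet's usual name, its rollout can be checked with:

    sudo /var/lib/minikube/binaries/v1.34.3/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system rollout status ds/kindnet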
	I1217 20:12:19.830186  489418 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1217 20:12:19.830375  489418 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:12:19.830508  489418 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-052340 minikube.k8s.io/updated_at=2025_12_17T20_12_19_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e7c291d147a7a4c759554efbd6d659a1a65fa869 minikube.k8s.io/name=addons-052340 minikube.k8s.io/primary=true
	I1217 20:12:19.987342  489418 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:12:19.987408  489418 ops.go:34] apiserver oom_adj: -16
	I1217 20:12:20.487524  489418 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:12:20.987772  489418 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:12:21.488445  489418 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:12:21.987565  489418 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:12:22.488136  489418 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:12:22.987433  489418 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:12:23.487456  489418 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:12:23.988272  489418 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:12:24.114766  489418 kubeadm.go:1114] duration metric: took 4.2844551s to wait for elevateKubeSystemPrivileges
	I1217 20:12:24.114794  489418 kubeadm.go:403] duration metric: took 22.370265885s to StartCluster
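Note: the repeated "kubectl get sa default" runs above are a poll loop; minikube waits for the controller-manager to create the default ServiceAccount before treating the cluster-admin grant to kube-system:default as complete (the 4.28s "elevateKubeSystemPrivileges" metric). The grant itself is the minikube-rbac binding created at 20:12:19, standalone:

    kubectl create clusterrolebinding minikube-rbac \
      --clusterrole=cluster-admin \
      --serviceaccount=kube-system:default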
	I1217 20:12:24.114811  489418 settings.go:142] acquiring lock: {Name:mk91fac3a58d91b836cd0701183ef7cc1e571672 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:12:24.114926  489418 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21808-485134/kubeconfig
	I1217 20:12:24.115293  489418 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-485134/kubeconfig: {Name:mkc7214551fa855d6d4575d627c3ecd54291b351 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:12:24.115518  489418 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 20:12:24.115727  489418 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1217 20:12:24.116015  489418 config.go:182] Loaded profile config "addons-052340": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:12:24.116059  489418 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
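Note: the toEnable map above is the merged addon selection for this profile (defaults plus whatever the test requested); each true entry below gets its own "Setting addon" goroutine, which is why the subsequent log lines interleave. The resulting per-profile state can be listed with the minikube CLI:

    minikube -p addons-052340 addons list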
	I1217 20:12:24.116131  489418 addons.go:70] Setting yakd=true in profile "addons-052340"
	I1217 20:12:24.116144  489418 addons.go:239] Setting addon yakd=true in "addons-052340"
	I1217 20:12:24.116168  489418 host.go:66] Checking if "addons-052340" exists ...
	I1217 20:12:24.116682  489418 addons.go:70] Setting inspektor-gadget=true in profile "addons-052340"
	I1217 20:12:24.116695  489418 addons.go:239] Setting addon inspektor-gadget=true in "addons-052340"
	I1217 20:12:24.116715  489418 host.go:66] Checking if "addons-052340" exists ...
	I1217 20:12:24.117113  489418 cli_runner.go:164] Run: docker container inspect addons-052340 --format={{.State.Status}}
	I1217 20:12:24.117545  489418 cli_runner.go:164] Run: docker container inspect addons-052340 --format={{.State.Status}}
	I1217 20:12:24.117674  489418 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-052340"
	I1217 20:12:24.117687  489418 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-052340"
	I1217 20:12:24.117709  489418 host.go:66] Checking if "addons-052340" exists ...
	I1217 20:12:24.118160  489418 cli_runner.go:164] Run: docker container inspect addons-052340 --format={{.State.Status}}
	I1217 20:12:24.119608  489418 addons.go:70] Setting metrics-server=true in profile "addons-052340"
	I1217 20:12:24.119678  489418 addons.go:239] Setting addon metrics-server=true in "addons-052340"
	I1217 20:12:24.119766  489418 host.go:66] Checking if "addons-052340" exists ...
	I1217 20:12:24.120278  489418 cli_runner.go:164] Run: docker container inspect addons-052340 --format={{.State.Status}}
	I1217 20:12:24.122226  489418 addons.go:70] Setting cloud-spanner=true in profile "addons-052340"
	I1217 20:12:24.122262  489418 addons.go:239] Setting addon cloud-spanner=true in "addons-052340"
	I1217 20:12:24.122302  489418 host.go:66] Checking if "addons-052340" exists ...
	I1217 20:12:24.122781  489418 cli_runner.go:164] Run: docker container inspect addons-052340 --format={{.State.Status}}
	I1217 20:12:24.131326  489418 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-052340"
	I1217 20:12:24.131404  489418 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-052340"
	I1217 20:12:24.131435  489418 host.go:66] Checking if "addons-052340" exists ...
	I1217 20:12:24.131753  489418 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-052340"
	I1217 20:12:24.131775  489418 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-052340"
	I1217 20:12:24.131802  489418 host.go:66] Checking if "addons-052340" exists ...
	I1217 20:12:24.131956  489418 cli_runner.go:164] Run: docker container inspect addons-052340 --format={{.State.Status}}
	I1217 20:12:24.132218  489418 cli_runner.go:164] Run: docker container inspect addons-052340 --format={{.State.Status}}
	I1217 20:12:24.133027  489418 addons.go:70] Setting registry=true in profile "addons-052340"
	I1217 20:12:24.133052  489418 addons.go:239] Setting addon registry=true in "addons-052340"
	I1217 20:12:24.133089  489418 host.go:66] Checking if "addons-052340" exists ...
	I1217 20:12:24.133550  489418 cli_runner.go:164] Run: docker container inspect addons-052340 --format={{.State.Status}}
	I1217 20:12:24.147521  489418 addons.go:70] Setting default-storageclass=true in profile "addons-052340"
	I1217 20:12:24.147554  489418 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-052340"
	I1217 20:12:24.147944  489418 cli_runner.go:164] Run: docker container inspect addons-052340 --format={{.State.Status}}
	I1217 20:12:24.153537  489418 addons.go:70] Setting registry-creds=true in profile "addons-052340"
	I1217 20:12:24.153574  489418 addons.go:239] Setting addon registry-creds=true in "addons-052340"
	I1217 20:12:24.153613  489418 host.go:66] Checking if "addons-052340" exists ...
	I1217 20:12:24.154154  489418 cli_runner.go:164] Run: docker container inspect addons-052340 --format={{.State.Status}}
	I1217 20:12:24.175560  489418 addons.go:70] Setting storage-provisioner=true in profile "addons-052340"
	I1217 20:12:24.175607  489418 addons.go:239] Setting addon storage-provisioner=true in "addons-052340"
	I1217 20:12:24.175651  489418 host.go:66] Checking if "addons-052340" exists ...
	I1217 20:12:24.176301  489418 cli_runner.go:164] Run: docker container inspect addons-052340 --format={{.State.Status}}
	I1217 20:12:24.176916  489418 addons.go:70] Setting gcp-auth=true in profile "addons-052340"
	I1217 20:12:24.176945  489418 mustload.go:66] Loading cluster: addons-052340
	I1217 20:12:24.177127  489418 config.go:182] Loaded profile config "addons-052340": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:12:24.177393  489418 cli_runner.go:164] Run: docker container inspect addons-052340 --format={{.State.Status}}
	I1217 20:12:24.207840  489418 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-052340"
	I1217 20:12:24.207883  489418 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-052340"
	I1217 20:12:24.208259  489418 cli_runner.go:164] Run: docker container inspect addons-052340 --format={{.State.Status}}
	I1217 20:12:24.218462  489418 addons.go:70] Setting ingress=true in profile "addons-052340"
	I1217 20:12:24.218497  489418 addons.go:239] Setting addon ingress=true in "addons-052340"
	I1217 20:12:24.218550  489418 host.go:66] Checking if "addons-052340" exists ...
	I1217 20:12:24.219052  489418 cli_runner.go:164] Run: docker container inspect addons-052340 --format={{.State.Status}}
	I1217 20:12:24.237520  489418 addons.go:70] Setting volcano=true in profile "addons-052340"
	I1217 20:12:24.237555  489418 addons.go:239] Setting addon volcano=true in "addons-052340"
	I1217 20:12:24.237597  489418 host.go:66] Checking if "addons-052340" exists ...
	I1217 20:12:24.238103  489418 cli_runner.go:164] Run: docker container inspect addons-052340 --format={{.State.Status}}
	I1217 20:12:24.242039  489418 out.go:179] * Verifying Kubernetes components...
	I1217 20:12:24.242278  489418 addons.go:70] Setting ingress-dns=true in profile "addons-052340"
	I1217 20:12:24.242315  489418 addons.go:239] Setting addon ingress-dns=true in "addons-052340"
	I1217 20:12:24.242365  489418 host.go:66] Checking if "addons-052340" exists ...
	I1217 20:12:24.242939  489418 cli_runner.go:164] Run: docker container inspect addons-052340 --format={{.State.Status}}
	I1217 20:12:24.269486  489418 addons.go:70] Setting volumesnapshots=true in profile "addons-052340"
	I1217 20:12:24.269527  489418 addons.go:239] Setting addon volumesnapshots=true in "addons-052340"
	I1217 20:12:24.269563  489418 host.go:66] Checking if "addons-052340" exists ...
	I1217 20:12:24.270154  489418 cli_runner.go:164] Run: docker container inspect addons-052340 --format={{.State.Status}}
	I1217 20:12:24.377768  489418 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.1
	I1217 20:12:24.383749  489418 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:12:24.432459  489418 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1217 20:12:24.432545  489418 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1217 20:12:24.432654  489418 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-052340
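Note: the docker container inspect template on this line is how minikube discovers the host port mapped to the node's SSH daemon (22/tcp inside the container); the sshutil lines further down show the resolved value, Port:33163. The same lookup by hand:

    docker container inspect -f \
      '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-052340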
	I1217 20:12:24.435688  489418 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1217 20:12:24.450789  489418 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1217 20:12:24.450855  489418 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1217 20:12:24.450953  489418 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-052340
	I1217 20:12:24.475146  489418 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1217 20:12:24.480746  489418 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.6
	I1217 20:12:24.480984  489418 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1217 20:12:24.481140  489418 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1217 20:12:24.481175  489418 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1217 20:12:24.481311  489418 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-052340
	I1217 20:12:24.487768  489418 addons.go:239] Setting addon default-storageclass=true in "addons-052340"
	I1217 20:12:24.487867  489418 host.go:66] Checking if "addons-052340" exists ...
	I1217 20:12:24.488458  489418 cli_runner.go:164] Run: docker container inspect addons-052340 --format={{.State.Status}}
	I1217 20:12:24.510912  489418 host.go:66] Checking if "addons-052340" exists ...
	I1217 20:12:24.520741  489418 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1217 20:12:24.524254  489418 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1217 20:12:24.524597  489418 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1217 20:12:24.524658  489418 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1217 20:12:24.524781  489418 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-052340
	I1217 20:12:24.529229  489418 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1217 20:12:24.529360  489418 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1217 20:12:24.529472  489418 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-052340
	W1217 20:12:24.571281  489418 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1217 20:12:24.579858  489418 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1217 20:12:24.579880  489418 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1217 20:12:24.579952  489418 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-052340
	I1217 20:12:24.580592  489418 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1217 20:12:24.606691  489418 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1217 20:12:24.606774  489418 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-052340
	I1217 20:12:24.580601  489418 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1217 20:12:24.580605  489418 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1217 20:12:24.588486  489418 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-052340"
	I1217 20:12:24.614009  489418 host.go:66] Checking if "addons-052340" exists ...
	I1217 20:12:24.614533  489418 cli_runner.go:164] Run: docker container inspect addons-052340 --format={{.State.Status}}
	I1217 20:12:24.626309  489418 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1217 20:12:24.631658  489418 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1217 20:12:24.631695  489418 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1217 20:12:24.631796  489418 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-052340
	I1217 20:12:24.637183  489418 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1217 20:12:24.640212  489418 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1217 20:12:24.644412  489418 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1217 20:12:24.652744  489418 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1217 20:12:24.654427  489418 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/addons-052340/id_rsa Username:docker}
	I1217 20:12:24.659733  489418 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 20:12:24.659796  489418 out.go:179]   - Using image docker.io/registry:3.0.0
	I1217 20:12:24.680752  489418 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1217 20:12:24.681030  489418 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1217 20:12:24.688041  489418 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1217 20:12:24.688190  489418 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-052340
	I1217 20:12:24.688497  489418 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:12:24.688510  489418 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 20:12:24.688563  489418 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-052340
	I1217 20:12:24.701094  489418 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1217 20:12:24.701127  489418 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1217 20:12:24.701192  489418 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-052340
	I1217 20:12:24.681039  489418 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1217 20:12:24.709803  489418 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1217 20:12:24.711667  489418 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/addons-052340/id_rsa Username:docker}
	I1217 20:12:24.715452  489418 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1217 20:12:24.716549  489418 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/addons-052340/id_rsa Username:docker}
	I1217 20:12:24.717389  489418 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/addons-052340/id_rsa Username:docker}
	I1217 20:12:24.718423  489418 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1217 20:12:24.718450  489418 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1217 20:12:24.718526  489418 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-052340
	I1217 20:12:24.726891  489418 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1217 20:12:24.733590  489418 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1217 20:12:24.737101  489418 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1217 20:12:24.743849  489418 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1217 20:12:24.743880  489418 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1217 20:12:24.743977  489418 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-052340
	I1217 20:12:24.760660  489418 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 20:12:24.760686  489418 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 20:12:24.760756  489418 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-052340
	I1217 20:12:24.792673  489418 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/addons-052340/id_rsa Username:docker}
	I1217 20:12:24.830120  489418 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/addons-052340/id_rsa Username:docker}
	I1217 20:12:24.837789  489418 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/addons-052340/id_rsa Username:docker}
	I1217 20:12:24.857980  489418 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1217 20:12:24.861246  489418 out.go:179]   - Using image docker.io/busybox:stable
	I1217 20:12:24.864927  489418 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1217 20:12:24.864952  489418 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1217 20:12:24.865034  489418 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-052340
	I1217 20:12:24.869142  489418 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/addons-052340/id_rsa Username:docker}
	I1217 20:12:24.901243  489418 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/addons-052340/id_rsa Username:docker}
	I1217 20:12:24.914832  489418 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/addons-052340/id_rsa Username:docker}
	I1217 20:12:24.923573  489418 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/addons-052340/id_rsa Username:docker}
	I1217 20:12:24.941371  489418 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/addons-052340/id_rsa Username:docker}
	W1217 20:12:24.943745  489418 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1217 20:12:24.943795  489418 retry.go:31] will retry after 207.94738ms: ssh: handshake failed: EOF
	I1217 20:12:24.963123  489418 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/addons-052340/id_rsa Username:docker}
	I1217 20:12:24.966477  489418 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/addons-052340/id_rsa Username:docker}
	I1217 20:12:24.972867  489418 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/addons-052340/id_rsa Username:docker}
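Each `docker container inspect` above resolves the host port that Docker published for the node container's SSH daemon (22/tcp); every `sshutil` client then dials 127.0.0.1 on that port (33163 in this run), and the single `handshake failed: EOF` at 20:12:24.943 is simply retried after ~208ms. A minimal shell sketch of the same lookup, using the container name and key path from this run:

    # Resolve the host port Docker mapped to the node's SSH port (22/tcp) ...
    PORT=$(docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-052340)
    # ... then open an SSH session the way minikube's sshutil does.
    ssh -i /home/jenkins/minikube-integration/21808-485134/.minikube/machines/addons-052340/id_rsa \
        -p "$PORT" docker@127.0.0.1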
	I1217 20:12:25.158190  489418 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.042427861s)
	I1217 20:12:25.158393  489418 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 20:12:25.158592  489418 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1217 20:12:25.535488  489418 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 20:12:25.628816  489418 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1217 20:12:25.828941  489418 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1217 20:12:25.828972  489418 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1217 20:12:25.918108  489418 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1217 20:12:25.950284  489418 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1217 20:12:25.984086  489418 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:12:26.020764  489418 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1217 20:12:26.020791  489418 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1217 20:12:26.034360  489418 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1217 20:12:26.065160  489418 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1217 20:12:26.065839  489418 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1217 20:12:26.065860  489418 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1217 20:12:26.069499  489418 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1217 20:12:26.069518  489418 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1217 20:12:26.109792  489418 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1217 20:12:26.109813  489418 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1217 20:12:26.123513  489418 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1217 20:12:26.225000  489418 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1217 20:12:26.225071  489418 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1217 20:12:26.230171  489418 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1217 20:12:26.230255  489418 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1217 20:12:26.236936  489418 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1217 20:12:26.320631  489418 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1217 20:12:26.378425  489418 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1217 20:12:26.378505  489418 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1217 20:12:26.383158  489418 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1217 20:12:26.383178  489418 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1217 20:12:26.433406  489418 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1217 20:12:26.433434  489418 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1217 20:12:26.487684  489418 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1217 20:12:26.487706  489418 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1217 20:12:26.535162  489418 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1217 20:12:26.632456  489418 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1217 20:12:26.632532  489418 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1217 20:12:26.698955  489418 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1217 20:12:26.699036  489418 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1217 20:12:26.758038  489418 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1217 20:12:26.758122  489418 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1217 20:12:26.786227  489418 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1217 20:12:26.786313  489418 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1217 20:12:26.959034  489418 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1217 20:12:26.959105  489418 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1217 20:12:27.024763  489418 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1217 20:12:27.146438  489418 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1217 20:12:27.179071  489418 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1217 20:12:27.179169  489418 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1217 20:12:27.407739  489418 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1217 20:12:27.407769  489418 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1217 20:12:27.475978  489418 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1217 20:12:27.476015  489418 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1217 20:12:27.769271  489418 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1217 20:12:27.769356  489418 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1217 20:12:27.809971  489418 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.651490952s)
	I1217 20:12:27.810054  489418 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.274533471s)
	I1217 20:12:27.810291  489418 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.651660562s)
	I1217 20:12:27.810313  489418 start.go:1013] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
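The sed pipeline that just completed rewrote the coredns ConfigMap in place: it inserts a `hosts` block mapping host.minikube.internal to the gateway IP 192.168.49.1 (plus a `log` directive before `errors`) and pipes the result back through `kubectl replace`. A quick way to confirm the injection, assuming kubectl points at this cluster:

    # Dump the Corefile; the injected block should read:
    #   hosts {
    #      192.168.49.1 host.minikube.internal
    #      fallthrough
    #   }
    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'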
	I1217 20:12:27.811020  489418 node_ready.go:35] waiting up to 6m0s for node "addons-052340" to be "Ready" ...
	I1217 20:12:27.914010  489418 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1217 20:12:28.154472  489418 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1217 20:12:28.154549  489418 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1217 20:12:28.286028  489418 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1217 20:12:28.286104  489418 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1217 20:12:28.316547  489418 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-052340" context rescaled to 1 replicas
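kapi.go scales the coredns deployment down to one replica for this single-node cluster; the equivalent manual command would be:

    # Scale coredns to a single replica, as kapi.go:214 reports doing.
    kubectl -n kube-system scale deployment coredns --replicas=1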
	I1217 20:12:28.394547  489418 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.765693593s)
	I1217 20:12:28.539812  489418 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1217 20:12:28.539912  489418 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1217 20:12:28.734483  489418 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1217 20:12:28.734565  489418 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1217 20:12:28.890316  489418 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1217 20:12:29.825641  489418 node_ready.go:57] node "addons-052340" has "Ready":"False" status (will retry)
	I1217 20:12:31.832551  489418 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.914404802s)
	I1217 20:12:31.832639  489418 addons.go:495] Verifying addon ingress=true in "addons-052340"
	I1217 20:12:31.832991  489418 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (5.798604599s)
	I1217 20:12:31.833063  489418 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (5.767881451s)
	I1217 20:12:31.832833  489418 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.848722288s)
	I1217 20:12:31.832673  489418 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.882363583s)
	I1217 20:12:31.833226  489418 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (5.709536985s)
	I1217 20:12:31.833299  489418 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.596266496s)
	I1217 20:12:31.833385  489418 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.512676102s)
	I1217 20:12:31.833462  489418 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.298231003s)
	I1217 20:12:31.833481  489418 addons.go:495] Verifying addon registry=true in "addons-052340"
	I1217 20:12:31.833955  489418 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.809112804s)
	I1217 20:12:31.833974  489418 addons.go:495] Verifying addon metrics-server=true in "addons-052340"
	I1217 20:12:31.834012  489418 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.687487717s)
	I1217 20:12:31.834178  489418 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.920088255s)
	W1217 20:12:31.834379  489418 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1217 20:12:31.834400  489418 retry.go:31] will retry after 296.854856ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
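The failure above is the usual CRD-establishment race rather than a broken manifest: the VolumeSnapshotClass object is applied in the same batch as the CRDs that define its kind, so the client's REST mapper has no snapshot.storage.k8s.io/v1 mapping yet and aborts with "ensure CRDs are installed first". minikube handles it by retrying (the next attempt at 20:12:32 adds `--force`), by which point the CRDs are established. A hedged sketch of sidestepping the race by applying in two phases instead, with the file names from the batch above:

    # Phase 1: install the snapshot CRDs and wait for the API server to establish them.
    kubectl apply \
      -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
      -f snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
      -f snapshot.storage.k8s.io_volumesnapshots.yaml
    kubectl wait --for=condition=Established \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
    # Phase 2: the VolumeSnapshotClass kind can now be mapped and applied.
    kubectl apply -f csi-hostpath-snapshotclass.yaml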
	I1217 20:12:31.836380  489418 out.go:179] * Verifying registry addon...
	I1217 20:12:31.838467  489418 out.go:179] * Verifying ingress addon...
	I1217 20:12:31.840356  489418 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-052340 service yakd-dashboard -n yakd-dashboard
	
	I1217 20:12:31.842911  489418 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1217 20:12:31.842985  489418 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1217 20:12:31.849910  489418 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1217 20:12:31.849940  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:12:31.850347  489418 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1217 20:12:31.850369  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
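kapi.go:96 now polls each label selector until its pods leave Pending; `current state: Pending: [<nil>]` means the pod object exists but has not yet reported a Ready condition. The manual equivalent of the two waits just started, assuming kubectl targets this cluster:

    # Block until the registry and ingress-nginx pods report Ready.
    kubectl -n kube-system wait --for=condition=Ready pod \
      -l kubernetes.io/minikube-addons=registry --timeout=6m
    kubectl -n ingress-nginx wait --for=condition=Ready pod \
      -l app.kubernetes.io/name=ingress-nginx --timeout=6m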
	I1217 20:12:32.131833  489418 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1217 20:12:32.140524  489418 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.250102119s)
	I1217 20:12:32.140570  489418 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-052340"
	I1217 20:12:32.143414  489418 out.go:179] * Verifying csi-hostpath-driver addon...
	I1217 20:12:32.146863  489418 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1217 20:12:32.173288  489418 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1217 20:12:32.173312  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:12:32.186326  489418 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1217 20:12:32.186423  489418 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-052340
	I1217 20:12:32.213786  489418 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/addons-052340/id_rsa Username:docker}
	W1217 20:12:32.314209  489418 node_ready.go:57] node "addons-052340" has "Ready":"False" status (will retry)
	I1217 20:12:32.349144  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:12:32.349362  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:12:32.357941  489418 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1217 20:12:32.371421  489418 addons.go:239] Setting addon gcp-auth=true in "addons-052340"
	I1217 20:12:32.371482  489418 host.go:66] Checking if "addons-052340" exists ...
	I1217 20:12:32.372016  489418 cli_runner.go:164] Run: docker container inspect addons-052340 --format={{.State.Status}}
	I1217 20:12:32.392763  489418 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1217 20:12:32.392845  489418 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-052340
	I1217 20:12:32.417306  489418 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/addons-052340/id_rsa Username:docker}
	I1217 20:12:32.650598  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:12:32.847050  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:12:32.847643  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:12:33.151175  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:12:33.346152  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:12:33.346329  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:12:33.650291  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:12:33.846022  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:12:33.846261  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:12:34.150766  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1217 20:12:34.314684  489418 node_ready.go:57] node "addons-052340" has "Ready":"False" status (will retry)
	I1217 20:12:34.347269  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:12:34.347306  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:12:34.651208  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:12:34.850841  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:12:34.851009  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:12:34.883964  489418 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.752089686s)
	I1217 20:12:34.884043  489418 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.491240083s)
	I1217 20:12:34.887382  489418 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1217 20:12:34.890482  489418 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1217 20:12:34.893357  489418 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1217 20:12:34.893385  489418 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1217 20:12:34.907921  489418 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1217 20:12:34.907945  489418 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1217 20:12:34.922197  489418 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1217 20:12:34.922253  489418 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1217 20:12:34.936840  489418 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1217 20:12:35.150996  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:12:35.350048  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:12:35.350711  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:12:35.432285  489418 addons.go:495] Verifying addon gcp-auth=true in "addons-052340"
	I1217 20:12:35.435809  489418 out.go:179] * Verifying gcp-auth addon...
	I1217 20:12:35.439489  489418 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1217 20:12:35.447621  489418 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1217 20:12:35.447696  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:12:35.650352  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:12:35.847322  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:12:35.847388  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:12:35.943389  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:12:36.151079  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:12:36.346613  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:12:36.346771  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:12:36.443427  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:12:36.650382  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1217 20:12:36.814586  489418 node_ready.go:57] node "addons-052340" has "Ready":"False" status (will retry)
	I1217 20:12:36.847104  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:12:36.847350  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:12:36.943506  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:12:37.150798  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:12:37.347519  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:12:37.347723  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:12:37.442873  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:12:37.649902  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:12:37.846226  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:12:37.846726  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:12:37.942745  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:12:38.151722  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:12:38.346766  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:12:38.347231  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:12:38.443328  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:12:38.678283  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:12:38.814852  489418 node_ready.go:49] node "addons-052340" is "Ready"
	I1217 20:12:38.814893  489418 node_ready.go:38] duration metric: took 11.003668139s for node "addons-052340" to be "Ready" ...
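node_ready.go polls the node's Ready condition until it flips to True (about 11s here); the same signal can be read directly:

    # Print the Ready condition the wait above was polling for.
    kubectl get node addons-052340 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'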
	I1217 20:12:38.814908  489418 api_server.go:52] waiting for apiserver process to appear ...
	I1217 20:12:38.814976  489418 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:12:38.838679  489418 api_server.go:72] duration metric: took 14.723132287s to wait for apiserver process to appear ...
	I1217 20:12:38.838756  489418 api_server.go:88] waiting for apiserver healthz status ...
	I1217 20:12:38.838790  489418 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1217 20:12:38.857182  489418 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1217 20:12:38.889377  489418 api_server.go:141] control plane version: v1.34.3
	I1217 20:12:38.889461  489418 api_server.go:131] duration metric: took 50.683327ms to wait for apiserver health ...
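With the kube-apiserver process confirmed via pgrep, the health probe hits /healthz over HTTPS on the node IP. The manual equivalent, skipping certificate verification since the harness validates the cluster CA separately:

    # Query the same endpoint; a 200 with body "ok" matches the log above.
    curl -k https://192.168.49.2:8443/healthz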
	I1217 20:12:38.889486  489418 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 20:12:39.026419  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:12:39.026899  489418 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1217 20:12:39.026961  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:12:39.029015  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:12:39.030006  489418 system_pods.go:59] 19 kube-system pods found
	I1217 20:12:39.030080  489418 system_pods.go:61] "coredns-66bc5c9577-gnsjt" [6b5581fa-9c20-4809-acec-fdb941b96e8b] Pending
	I1217 20:12:39.030106  489418 system_pods.go:61] "csi-hostpath-attacher-0" [cffdfcc3-5be6-49bd-bbf0-f55a8fd74835] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1217 20:12:39.030147  489418 system_pods.go:61] "csi-hostpath-resizer-0" [0f0ab7ea-fb94-4093-ace4-699ac63bc501] Pending
	I1217 20:12:39.030173  489418 system_pods.go:61] "csi-hostpathplugin-r5tvz" [5fad484d-ec05-403c-98fe-d17423ad7823] Pending
	I1217 20:12:39.030193  489418 system_pods.go:61] "etcd-addons-052340" [811b19da-296f-4111-9c99-99e6e1747656] Running
	I1217 20:12:39.030213  489418 system_pods.go:61] "kindnet-sk69j" [0fd139f6-2369-4190-8538-13b4246cd1be] Running
	I1217 20:12:39.030233  489418 system_pods.go:61] "kube-apiserver-addons-052340" [84f7271a-33ae-4ce7-9678-e689706c3875] Running
	I1217 20:12:39.030262  489418 system_pods.go:61] "kube-controller-manager-addons-052340" [8fce3e2e-9d80-4002-962b-849b820175b1] Running
	I1217 20:12:39.030287  489418 system_pods.go:61] "kube-ingress-dns-minikube" [dd873af3-c7ba-4c3e-966a-558145fdd163] Pending
	I1217 20:12:39.030307  489418 system_pods.go:61] "kube-proxy-k6bpd" [caf1f177-b1c9-47ca-a5bf-6dca0b3cb333] Running
	I1217 20:12:39.030328  489418 system_pods.go:61] "kube-scheduler-addons-052340" [1a342705-94c6-49ff-9b2c-03f3bd0de227] Running
	I1217 20:12:39.030350  489418 system_pods.go:61] "metrics-server-85b7d694d7-5g267" [5948d9f5-4c59-4b54-9f3e-7fe6fe4859c0] Pending
	I1217 20:12:39.030385  489418 system_pods.go:61] "nvidia-device-plugin-daemonset-b7cpw" [f3e2f332-3129-49a8-8cb2-5dedaf7c64e2] Pending
	I1217 20:12:39.030403  489418 system_pods.go:61] "registry-6b586f9694-h2xmf" [2534706e-f0bb-4990-85ac-495c0ace51cc] Pending
	I1217 20:12:39.030423  489418 system_pods.go:61] "registry-creds-764b6fb674-4s27d" [dc923fe5-3d2f-4a8f-b089-bf8bf7d8040a] Pending
	I1217 20:12:39.030446  489418 system_pods.go:61] "registry-proxy-5q5m2" [ae7a6b86-2960-405a-9d77-ff957fe9411a] Pending
	I1217 20:12:39.030477  489418 system_pods.go:61] "snapshot-controller-7d9fbc56b8-4pf8h" [cfb30b47-2503-495d-b90c-9ae6471c5748] Pending
	I1217 20:12:39.030501  489418 system_pods.go:61] "snapshot-controller-7d9fbc56b8-7528t" [430605c6-5bc2-4f95-8c7c-d69c2da0556f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 20:12:39.030527  489418 system_pods.go:61] "storage-provisioner" [c2c69dbe-6663-429e-8e32-55d14709cd6e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 20:12:39.030563  489418 system_pods.go:74] duration metric: took 141.057039ms to wait for pod list to return data ...
	I1217 20:12:39.030591  489418 default_sa.go:34] waiting for default service account to be created ...
	I1217 20:12:39.096187  489418 default_sa.go:45] found service account: "default"
	I1217 20:12:39.096258  489418 default_sa.go:55] duration metric: took 65.645572ms for default service account to be created ...
	I1217 20:12:39.096305  489418 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 20:12:39.125460  489418 system_pods.go:86] 19 kube-system pods found
	I1217 20:12:39.131890  489418 system_pods.go:89] "coredns-66bc5c9577-gnsjt" [6b5581fa-9c20-4809-acec-fdb941b96e8b] Pending
	I1217 20:12:39.132339  489418 system_pods.go:89] "csi-hostpath-attacher-0" [cffdfcc3-5be6-49bd-bbf0-f55a8fd74835] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1217 20:12:39.132354  489418 system_pods.go:89] "csi-hostpath-resizer-0" [0f0ab7ea-fb94-4093-ace4-699ac63bc501] Pending
	I1217 20:12:39.132362  489418 system_pods.go:89] "csi-hostpathplugin-r5tvz" [5fad484d-ec05-403c-98fe-d17423ad7823] Pending
	I1217 20:12:39.132366  489418 system_pods.go:89] "etcd-addons-052340" [811b19da-296f-4111-9c99-99e6e1747656] Running
	I1217 20:12:39.132371  489418 system_pods.go:89] "kindnet-sk69j" [0fd139f6-2369-4190-8538-13b4246cd1be] Running
	I1217 20:12:39.132376  489418 system_pods.go:89] "kube-apiserver-addons-052340" [84f7271a-33ae-4ce7-9678-e689706c3875] Running
	I1217 20:12:39.132382  489418 system_pods.go:89] "kube-controller-manager-addons-052340" [8fce3e2e-9d80-4002-962b-849b820175b1] Running
	I1217 20:12:39.132387  489418 system_pods.go:89] "kube-ingress-dns-minikube" [dd873af3-c7ba-4c3e-966a-558145fdd163] Pending
	I1217 20:12:39.132391  489418 system_pods.go:89] "kube-proxy-k6bpd" [caf1f177-b1c9-47ca-a5bf-6dca0b3cb333] Running
	I1217 20:12:39.132395  489418 system_pods.go:89] "kube-scheduler-addons-052340" [1a342705-94c6-49ff-9b2c-03f3bd0de227] Running
	I1217 20:12:39.132400  489418 system_pods.go:89] "metrics-server-85b7d694d7-5g267" [5948d9f5-4c59-4b54-9f3e-7fe6fe4859c0] Pending
	I1217 20:12:39.132404  489418 system_pods.go:89] "nvidia-device-plugin-daemonset-b7cpw" [f3e2f332-3129-49a8-8cb2-5dedaf7c64e2] Pending
	I1217 20:12:39.132408  489418 system_pods.go:89] "registry-6b586f9694-h2xmf" [2534706e-f0bb-4990-85ac-495c0ace51cc] Pending
	I1217 20:12:39.132412  489418 system_pods.go:89] "registry-creds-764b6fb674-4s27d" [dc923fe5-3d2f-4a8f-b089-bf8bf7d8040a] Pending
	I1217 20:12:39.132416  489418 system_pods.go:89] "registry-proxy-5q5m2" [ae7a6b86-2960-405a-9d77-ff957fe9411a] Pending
	I1217 20:12:39.132420  489418 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4pf8h" [cfb30b47-2503-495d-b90c-9ae6471c5748] Pending
	I1217 20:12:39.132427  489418 system_pods.go:89] "snapshot-controller-7d9fbc56b8-7528t" [430605c6-5bc2-4f95-8c7c-d69c2da0556f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 20:12:39.132433  489418 system_pods.go:89] "storage-provisioner" [c2c69dbe-6663-429e-8e32-55d14709cd6e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 20:12:39.132450  489418 retry.go:31] will retry after 275.720911ms: missing components: kube-dns
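The retry fires because the poll treats kube-dns as missing until the coredns pod is Running; the pod being waited on can be listed directly via the standard k8s-app=kube-dns label:

    # Show the coredns pod whose Pending status triggers the 275ms retry above.
    kubectl -n kube-system get pods -l k8s-app=kube-dns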
	I1217 20:12:39.190405  489418 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1217 20:12:39.190503  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:12:39.366234  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:12:39.369177  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:12:39.419193  489418 system_pods.go:86] 19 kube-system pods found
	I1217 20:12:39.419286  489418 system_pods.go:89] "coredns-66bc5c9577-gnsjt" [6b5581fa-9c20-4809-acec-fdb941b96e8b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 20:12:39.419313  489418 system_pods.go:89] "csi-hostpath-attacher-0" [cffdfcc3-5be6-49bd-bbf0-f55a8fd74835] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1217 20:12:39.419352  489418 system_pods.go:89] "csi-hostpath-resizer-0" [0f0ab7ea-fb94-4093-ace4-699ac63bc501] Pending
	I1217 20:12:39.419380  489418 system_pods.go:89] "csi-hostpathplugin-r5tvz" [5fad484d-ec05-403c-98fe-d17423ad7823] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1217 20:12:39.419401  489418 system_pods.go:89] "etcd-addons-052340" [811b19da-296f-4111-9c99-99e6e1747656] Running
	I1217 20:12:39.419423  489418 system_pods.go:89] "kindnet-sk69j" [0fd139f6-2369-4190-8538-13b4246cd1be] Running
	I1217 20:12:39.419455  489418 system_pods.go:89] "kube-apiserver-addons-052340" [84f7271a-33ae-4ce7-9678-e689706c3875] Running
	I1217 20:12:39.419476  489418 system_pods.go:89] "kube-controller-manager-addons-052340" [8fce3e2e-9d80-4002-962b-849b820175b1] Running
	I1217 20:12:39.419496  489418 system_pods.go:89] "kube-ingress-dns-minikube" [dd873af3-c7ba-4c3e-966a-558145fdd163] Pending
	I1217 20:12:39.419515  489418 system_pods.go:89] "kube-proxy-k6bpd" [caf1f177-b1c9-47ca-a5bf-6dca0b3cb333] Running
	I1217 20:12:39.419536  489418 system_pods.go:89] "kube-scheduler-addons-052340" [1a342705-94c6-49ff-9b2c-03f3bd0de227] Running
	I1217 20:12:39.419565  489418 system_pods.go:89] "metrics-server-85b7d694d7-5g267" [5948d9f5-4c59-4b54-9f3e-7fe6fe4859c0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1217 20:12:39.419710  489418 system_pods.go:89] "nvidia-device-plugin-daemonset-b7cpw" [f3e2f332-3129-49a8-8cb2-5dedaf7c64e2] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1217 20:12:39.419737  489418 system_pods.go:89] "registry-6b586f9694-h2xmf" [2534706e-f0bb-4990-85ac-495c0ace51cc] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1217 20:12:39.419759  489418 system_pods.go:89] "registry-creds-764b6fb674-4s27d" [dc923fe5-3d2f-4a8f-b089-bf8bf7d8040a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1217 20:12:39.419793  489418 system_pods.go:89] "registry-proxy-5q5m2" [ae7a6b86-2960-405a-9d77-ff957fe9411a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1217 20:12:39.419823  489418 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4pf8h" [cfb30b47-2503-495d-b90c-9ae6471c5748] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 20:12:39.419847  489418 system_pods.go:89] "snapshot-controller-7d9fbc56b8-7528t" [430605c6-5bc2-4f95-8c7c-d69c2da0556f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 20:12:39.419872  489418 system_pods.go:89] "storage-provisioner" [c2c69dbe-6663-429e-8e32-55d14709cd6e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 20:12:39.419917  489418 retry.go:31] will retry after 389.121722ms: missing components: kube-dns
	I1217 20:12:39.447139  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:12:39.651399  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:12:39.840754  489418 system_pods.go:86] 19 kube-system pods found
	I1217 20:12:39.840848  489418 system_pods.go:89] "coredns-66bc5c9577-gnsjt" [6b5581fa-9c20-4809-acec-fdb941b96e8b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 20:12:39.840877  489418 system_pods.go:89] "csi-hostpath-attacher-0" [cffdfcc3-5be6-49bd-bbf0-f55a8fd74835] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1217 20:12:39.840917  489418 system_pods.go:89] "csi-hostpath-resizer-0" [0f0ab7ea-fb94-4093-ace4-699ac63bc501] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1217 20:12:39.840948  489418 system_pods.go:89] "csi-hostpathplugin-r5tvz" [5fad484d-ec05-403c-98fe-d17423ad7823] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1217 20:12:39.840970  489418 system_pods.go:89] "etcd-addons-052340" [811b19da-296f-4111-9c99-99e6e1747656] Running
	I1217 20:12:39.840992  489418 system_pods.go:89] "kindnet-sk69j" [0fd139f6-2369-4190-8538-13b4246cd1be] Running
	I1217 20:12:39.841028  489418 system_pods.go:89] "kube-apiserver-addons-052340" [84f7271a-33ae-4ce7-9678-e689706c3875] Running
	I1217 20:12:39.841053  489418 system_pods.go:89] "kube-controller-manager-addons-052340" [8fce3e2e-9d80-4002-962b-849b820175b1] Running
	I1217 20:12:39.841077  489418 system_pods.go:89] "kube-ingress-dns-minikube" [dd873af3-c7ba-4c3e-966a-558145fdd163] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1217 20:12:39.841098  489418 system_pods.go:89] "kube-proxy-k6bpd" [caf1f177-b1c9-47ca-a5bf-6dca0b3cb333] Running
	I1217 20:12:39.841133  489418 system_pods.go:89] "kube-scheduler-addons-052340" [1a342705-94c6-49ff-9b2c-03f3bd0de227] Running
	I1217 20:12:39.841157  489418 system_pods.go:89] "metrics-server-85b7d694d7-5g267" [5948d9f5-4c59-4b54-9f3e-7fe6fe4859c0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1217 20:12:39.841176  489418 system_pods.go:89] "nvidia-device-plugin-daemonset-b7cpw" [f3e2f332-3129-49a8-8cb2-5dedaf7c64e2] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1217 20:12:39.841198  489418 system_pods.go:89] "registry-6b586f9694-h2xmf" [2534706e-f0bb-4990-85ac-495c0ace51cc] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1217 20:12:39.841243  489418 system_pods.go:89] "registry-creds-764b6fb674-4s27d" [dc923fe5-3d2f-4a8f-b089-bf8bf7d8040a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1217 20:12:39.841268  489418 system_pods.go:89] "registry-proxy-5q5m2" [ae7a6b86-2960-405a-9d77-ff957fe9411a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1217 20:12:39.841290  489418 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4pf8h" [cfb30b47-2503-495d-b90c-9ae6471c5748] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 20:12:39.841313  489418 system_pods.go:89] "snapshot-controller-7d9fbc56b8-7528t" [430605c6-5bc2-4f95-8c7c-d69c2da0556f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 20:12:39.841355  489418 system_pods.go:89] "storage-provisioner" [c2c69dbe-6663-429e-8e32-55d14709cd6e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 20:12:39.841392  489418 retry.go:31] will retry after 474.900694ms: missing components: kube-dns
	I1217 20:12:39.933174  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:12:39.933792  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:12:39.944811  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:12:40.150602  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:12:40.335965  489418 system_pods.go:86] 19 kube-system pods found
	I1217 20:12:40.336043  489418 system_pods.go:89] "coredns-66bc5c9577-gnsjt" [6b5581fa-9c20-4809-acec-fdb941b96e8b] Running
	I1217 20:12:40.336069  489418 system_pods.go:89] "csi-hostpath-attacher-0" [cffdfcc3-5be6-49bd-bbf0-f55a8fd74835] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1217 20:12:40.336089  489418 system_pods.go:89] "csi-hostpath-resizer-0" [0f0ab7ea-fb94-4093-ace4-699ac63bc501] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1217 20:12:40.336132  489418 system_pods.go:89] "csi-hostpathplugin-r5tvz" [5fad484d-ec05-403c-98fe-d17423ad7823] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1217 20:12:40.336161  489418 system_pods.go:89] "etcd-addons-052340" [811b19da-296f-4111-9c99-99e6e1747656] Running
	I1217 20:12:40.336181  489418 system_pods.go:89] "kindnet-sk69j" [0fd139f6-2369-4190-8538-13b4246cd1be] Running
	I1217 20:12:40.336200  489418 system_pods.go:89] "kube-apiserver-addons-052340" [84f7271a-33ae-4ce7-9678-e689706c3875] Running
	I1217 20:12:40.336219  489418 system_pods.go:89] "kube-controller-manager-addons-052340" [8fce3e2e-9d80-4002-962b-849b820175b1] Running
	I1217 20:12:40.336250  489418 system_pods.go:89] "kube-ingress-dns-minikube" [dd873af3-c7ba-4c3e-966a-558145fdd163] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1217 20:12:40.336274  489418 system_pods.go:89] "kube-proxy-k6bpd" [caf1f177-b1c9-47ca-a5bf-6dca0b3cb333] Running
	I1217 20:12:40.336295  489418 system_pods.go:89] "kube-scheduler-addons-052340" [1a342705-94c6-49ff-9b2c-03f3bd0de227] Running
	I1217 20:12:40.336317  489418 system_pods.go:89] "metrics-server-85b7d694d7-5g267" [5948d9f5-4c59-4b54-9f3e-7fe6fe4859c0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1217 20:12:40.336349  489418 system_pods.go:89] "nvidia-device-plugin-daemonset-b7cpw" [f3e2f332-3129-49a8-8cb2-5dedaf7c64e2] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1217 20:12:40.336380  489418 system_pods.go:89] "registry-6b586f9694-h2xmf" [2534706e-f0bb-4990-85ac-495c0ace51cc] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1217 20:12:40.336400  489418 system_pods.go:89] "registry-creds-764b6fb674-4s27d" [dc923fe5-3d2f-4a8f-b089-bf8bf7d8040a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1217 20:12:40.336422  489418 system_pods.go:89] "registry-proxy-5q5m2" [ae7a6b86-2960-405a-9d77-ff957fe9411a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1217 20:12:40.336454  489418 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4pf8h" [cfb30b47-2503-495d-b90c-9ae6471c5748] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 20:12:40.336481  489418 system_pods.go:89] "snapshot-controller-7d9fbc56b8-7528t" [430605c6-5bc2-4f95-8c7c-d69c2da0556f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 20:12:40.336501  489418 system_pods.go:89] "storage-provisioner" [c2c69dbe-6663-429e-8e32-55d14709cd6e] Running
	I1217 20:12:40.336527  489418 system_pods.go:126] duration metric: took 1.240188101s to wait for k8s-apps to be running ...
	I1217 20:12:40.336560  489418 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 20:12:40.336640  489418 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 20:12:40.364834  489418 system_svc.go:56] duration metric: took 28.265502ms WaitForService to wait for kubelet
	I1217 20:12:40.364909  489418 kubeadm.go:587] duration metric: took 16.249366709s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
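(The kubelet wait above reduces to running `sudo systemctl is-active --quiet service kubelet` over the SSH runner and treating a zero exit status as "running". A local, non-SSH sketch of that check using os/exec, for illustration only — the exact command string is taken from the log line above:

package svcsketch

import (
	"fmt"
	"os/exec"
)

// serviceActive reports whether a systemd unit is active on this host:
// `systemctl is-active --quiet` exits 0 only when the unit is active.
func serviceActive(unit string) (bool, error) {
	cmd := exec.Command("systemctl", "is-active", "--quiet", unit)
	if err := cmd.Run(); err != nil {
		if _, ok := err.(*exec.ExitError); ok {
			return false, nil // non-zero exit: unit not active
		}
		return false, fmt.Errorf("running systemctl: %w", err)
	}
	return true, nil
}

)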
	I1217 20:12:40.364947  489418 node_conditions.go:102] verifying NodePressure condition ...
	I1217 20:12:40.368168  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:12:40.368510  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:12:40.377143  489418 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1217 20:12:40.377239  489418 node_conditions.go:123] node cpu capacity is 2
	I1217 20:12:40.377269  489418 node_conditions.go:105] duration metric: took 12.300157ms to run NodePressure ...
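(The node_conditions.go lines above come from reading capacity and pressure conditions off the Node objects: ephemeral storage 203034800Ki, 2 CPUs, and no pressure condition set. A hedged client-go sketch of that verification — illustrative only, not minikube's node_conditions.go:

package nodesketch

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// checkNodePressure lists nodes, prints their ephemeral-storage and CPU
// capacity, and fails if any pressure condition is True.
func checkNodePressure(ctx context.Context, cs kubernetes.Interface) error {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("node %s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
		for _, c := range n.Status.Conditions {
			// MemoryPressure, DiskPressure, PIDPressure should all be False;
			// only NodeReady is expected to be True.
			if c.Type != corev1.NodeReady && c.Status == corev1.ConditionTrue {
				return fmt.Errorf("node %s reports %s", n.Name, c.Type)
			}
		}
	}
	return nil
}

)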
	I1217 20:12:40.377308  489418 start.go:242] waiting for startup goroutines ...
	I1217 20:12:40.461000  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:12:40.650540  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:12:40.848172  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:12:40.849326  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:12:40.943393  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:12:41.158856  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:12:41.346928  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:12:41.348343  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:12:41.442897  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:12:41.650625  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:12:41.848429  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:12:41.848594  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:12:41.943312  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:12:42.152567  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:12:42.349007  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:12:42.349486  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:12:42.448896  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:12:42.649978  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:12:42.847395  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:12:42.847659  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:12:42.942965  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:12:43.151060  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:12:43.347038  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:12:43.347706  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:12:43.442687  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:12:43.650666  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:12:43.849744  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:12:43.850179  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:12:43.944282  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:12:44.151077  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:12:44.347958  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:12:44.348284  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:12:44.443419  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:12:44.651491  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:12:44.848034  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:12:44.848164  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:12:44.943099  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:12:45.151753  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:12:45.352604  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:12:45.353108  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:12:45.443702  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:12:45.651463  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:12:45.847104  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:12:45.847320  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:12:45.944106  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:12:46.151478  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:12:46.349257  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:12:46.349595  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:12:46.450369  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:12:46.650952  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:12:46.848493  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:12:46.848852  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:12:46.942788  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:12:47.152270  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:12:47.349840  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:12:47.349995  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:12:47.448649  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:12:47.650304  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:12:47.848118  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:12:47.848541  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:12:47.943325  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:12:48.151049  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:12:48.350209  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:12:48.351574  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:12:48.442953  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:12:48.650299  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:12:48.849064  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:12:48.851187  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:12:48.943518  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:12:49.151055  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:12:49.349246  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:12:49.354342  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:12:49.443188  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:12:49.650851  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:12:49.848219  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:12:49.848779  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:12:49.942829  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:12:50.150738  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:12:50.347825  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:12:50.348961  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:12:50.443340  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:12:50.650900  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:12:50.846888  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:12:50.847737  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:12:50.942896  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:12:51.150805  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:12:51.357199  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:12:51.357352  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:12:51.457215  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:12:51.655810  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:12:51.849524  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:12:51.850420  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:12:51.947195  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:12:52.151951  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:12:52.347556  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:12:52.348343  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:12:52.443915  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:12:52.651073  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:12:52.847888  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:12:52.848110  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:12:52.943192  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:12:53.151023  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:12:53.352415  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:12:53.352757  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:12:53.450437  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:12:53.652735  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:12:53.847619  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:12:53.847985  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:12:53.954213  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:12:54.150850  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:12:54.347742  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:12:54.347870  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:12:54.445160  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:12:54.652126  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:12:54.849334  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:12:54.850020  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:12:54.944190  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:12:55.150999  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:12:55.349544  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:12:55.349931  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:12:55.443634  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:12:55.652133  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:12:55.848378  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:12:55.849716  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:12:55.942686  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:12:56.151323  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:12:56.349225  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:12:56.350723  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:12:56.443250  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:12:56.651396  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:12:56.847157  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:12:56.847264  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:12:56.943108  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:12:57.150187  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:12:57.347540  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:12:57.347865  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:12:57.443527  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:12:57.650783  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:12:57.847319  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:12:57.847776  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:12:57.942974  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:12:58.151130  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:12:58.347825  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:12:58.348013  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:12:58.448655  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:12:58.651210  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:12:58.848036  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:12:58.848216  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:12:58.943144  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:12:59.150719  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:12:59.347089  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:12:59.347539  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:12:59.442889  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:12:59.650603  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:12:59.848493  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:12:59.848974  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:12:59.943039  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:13:00.222070  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:00.354138  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:13:00.354834  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:00.443965  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:13:00.650403  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:00.848112  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:00.848525  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:13:00.942715  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:13:01.150843  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:01.348136  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:01.348358  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:13:01.442783  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:13:01.650453  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:01.848037  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:13:01.848237  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:01.943523  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:13:02.154095  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:02.350534  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:13:02.351933  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:02.449664  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:13:02.651882  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:02.848603  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:13:02.849060  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:02.943389  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:13:03.151633  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:03.348505  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:03.348923  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:13:03.443358  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:13:03.651317  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:03.857031  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:03.858011  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:13:03.955251  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:13:04.150978  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:04.350235  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:04.350572  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:13:04.442898  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:13:04.650704  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:04.848624  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:04.849011  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:13:04.943199  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:13:05.150690  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:05.348123  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:13:05.348259  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:05.443382  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:13:05.651331  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:05.847401  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:13:05.847816  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:05.942754  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:13:06.151152  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:06.347341  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:06.347441  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:13:06.447883  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:13:06.650764  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:06.846878  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:13:06.847047  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:06.943780  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:13:07.150712  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:07.347437  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:07.347666  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:13:07.442715  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:13:07.650739  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:07.846281  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:13:07.847610  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:07.942627  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:13:08.151768  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:08.346349  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:13:08.346636  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:08.442740  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:13:08.651871  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:08.847218  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:13:08.847435  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:08.943172  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:13:09.150519  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:09.347900  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:09.348030  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:13:09.443314  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:13:09.651164  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:09.847881  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:13:09.848150  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:09.943209  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:13:10.151144  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:10.348909  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:13:10.349356  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:10.443211  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:13:10.651759  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:10.849338  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:10.849839  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:13:10.943053  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:13:11.150832  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:11.346722  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:11.346872  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:13:11.443335  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:13:11.651076  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:11.846557  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:13:11.847017  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:11.943109  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:13:12.150523  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:12.347339  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:13:12.348015  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:12.443161  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:13:12.650957  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:12.846503  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:12.846733  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:13:12.942698  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:13:13.150558  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:13.347067  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:13.347116  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:13:13.442912  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:13:13.650398  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:13.847084  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:13.847093  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:13:13.944215  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:13:14.150900  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:14.346477  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:14.346619  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:13:14.442585  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:13:14.651121  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:14.846579  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:13:14.847024  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:14.942730  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:13:15.150313  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:15.346354  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:13:15.346491  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:15.442431  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:13:15.650516  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:15.847104  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:15.847746  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:13:15.942563  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:13:16.151102  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:16.347079  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:16.347433  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:13:16.442488  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:13:16.652776  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:16.846634  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:13:16.846852  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:16.942649  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:13:17.151033  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:17.346629  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:17.346838  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:13:17.442765  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:13:17.651054  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:17.847567  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:17.848937  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:13:17.942741  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:13:18.150934  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:18.346228  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:13:18.346526  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:18.443284  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:13:18.651058  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:18.846461  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:13:18.846491  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:18.942615  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:13:19.150721  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:19.346224  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:13:19.346435  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:19.442531  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:13:19.650942  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:19.847278  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:13:19.847520  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:19.942363  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:13:20.151514  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:20.347271  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:20.347410  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:13:20.443230  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:13:20.650793  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:20.846430  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:13:20.846582  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:20.942736  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:13:21.151456  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:21.347419  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:13:21.347616  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:21.442875  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:13:21.650614  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:21.847112  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:21.847298  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:13:21.943260  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:13:22.150619  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:22.347673  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:13:22.347855  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:22.442975  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:13:22.650944  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:22.846621  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:22.846933  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:13:22.942704  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:13:23.151282  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:23.346528  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:23.346733  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:13:23.443096  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:13:23.650494  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:23.847406  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:13:23.847615  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:23.943124  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:13:24.150391  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:24.347094  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:13:24.347256  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:24.443156  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:13:24.651099  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:24.846837  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:24.846996  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:13:24.942832  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:13:25.150839  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:25.348079  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:25.348359  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:13:25.443351  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:13:25.651384  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:25.847285  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:25.847509  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:13:25.943278  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:13:26.151027  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:26.346767  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:13:26.347269  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:26.443391  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:13:26.650628  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:26.847785  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:26.848219  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:13:26.942331  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:13:27.151341  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:27.347323  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:13:27.347543  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:27.443241  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:13:27.650753  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:27.847907  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:27.848068  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:13:27.943064  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:13:28.150892  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:28.347770  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:13:28.348651  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:28.443054  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:13:28.651578  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:28.851019  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:13:28.851205  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:28.949271  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:13:29.151392  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:29.347209  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:13:29.347733  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:29.442896  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:13:29.650637  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:29.848378  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:13:29.848991  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:29.943064  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:13:30.151279  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:30.347681  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:13:30.347831  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:30.447974  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:13:30.650764  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:30.847310  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:13:30.848139  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:30.943001  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:13:31.150808  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:31.348727  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:13:31.349205  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:31.443499  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:13:31.651223  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:31.846838  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:13:31.846931  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:31.943000  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:13:32.150414  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:32.347510  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:13:32.347914  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:32.442620  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:13:32.651490  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:32.846746  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:32.846966  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:13:32.942816  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:13:33.149908  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:33.347128  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:33.347361  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 20:13:33.443207  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:13:33.651348  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:33.848630  489418 kapi.go:107] duration metric: took 1m2.005717766s to wait for kubernetes.io/minikube-addons=registry ...
	I1217 20:13:33.848813  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:33.943421  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:13:34.150994  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:34.346560  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:34.442742  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:13:34.651975  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:34.846473  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:34.942861  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:13:35.150781  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:35.346195  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:35.443488  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:13:35.651302  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:35.846310  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:35.951977  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:13:36.150796  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:36.347210  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:36.442643  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:13:36.652404  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:36.846697  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:36.943783  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:13:37.151614  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:37.347809  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:37.443113  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:13:37.659455  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:37.849700  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:37.942961  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:13:38.155742  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:38.347133  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:38.443903  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:13:38.658017  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:38.846813  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:38.947366  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:13:39.151990  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:39.346454  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:39.443067  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:13:39.652567  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:39.846995  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:39.943835  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:13:40.151127  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:40.354359  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:40.444680  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 20:13:40.658239  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:40.847185  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:40.947749  489418 kapi.go:107] duration metric: took 1m5.508261299s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1217 20:13:40.951845  489418 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-052340 cluster.
	I1217 20:13:40.955070  489418 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1217 20:13:40.958375  489418 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
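The `gcp-auth-skip-secret` label mentioned in the message above is evaluated by a mutating webhook when a pod is created, so it has to be present in the pod spec at creation time. A minimal sketch of opting one pod out, assuming the webhook accepts the label with value "true" (the pod name and image here are placeholders, not from this run):

    # Hypothetical pod; the label must be set at creation (value "true" is an assumption)
    kubectl run skip-demo --image=busybox --labels=gcp-auth-skip-secret=true -- sleep 3600

    # Per the message above, pods that already exist pick up credentials only after
    # being recreated, or after re-running the addon with --refresh:
    out/minikube-linux-arm64 -p addons-052340 addons enable gcp-auth --refresh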
	I1217 20:13:41.151700  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:41.346866  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:41.650650  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:41.847761  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:42.153538  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:42.349847  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:42.652236  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:42.849641  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:43.151993  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:43.346026  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:43.651003  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:43.846380  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:44.150640  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:44.346617  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:44.650313  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:44.846535  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:45.151909  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:45.346522  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:45.651228  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:45.847022  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:46.150515  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:46.347690  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:46.651437  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:46.847403  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:47.151567  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:47.347043  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:47.650971  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:47.846766  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:48.152379  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:48.347114  489418 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 20:13:48.650622  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:48.847045  489418 kapi.go:107] duration metric: took 1m17.004054713s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1217 20:13:49.150499  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:49.654495  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:50.155612  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:50.651227  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:51.151864  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:51.655238  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:52.151195  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:52.650223  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:53.151035  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:53.650870  489418 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 20:13:54.151189  489418 kapi.go:107] duration metric: took 1m22.004325474s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1217 20:13:54.154233  489418 out.go:179] * Enabled addons: default-storageclass, nvidia-device-plugin, registry-creds, amd-gpu-device-plugin, storage-provisioner, ingress-dns, inspektor-gadget, cloud-spanner, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1217 20:13:54.156998  489418 addons.go:530] duration metric: took 1m30.040934347s for enable addons: enabled=[default-storageclass nvidia-device-plugin registry-creds amd-gpu-device-plugin storage-provisioner ingress-dns inspektor-gadget cloud-spanner metrics-server yakd storage-provisioner-rancher volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
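The enabled-addon set recorded above can be cross-checked against the profile with the minikube CLI; a sketch, with the profile name taken from this log:

    # List addon status for this profile; the sixteen addons named above
    # should report as enabled.
    out/minikube-linux-arm64 -p addons-052340 addons list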
	I1217 20:13:54.157057  489418 start.go:247] waiting for cluster config update ...
	I1217 20:13:54.157083  489418 start.go:256] writing updated cluster config ...
	I1217 20:13:54.157410  489418 ssh_runner.go:195] Run: rm -f paused
	I1217 20:13:54.163774  489418 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 20:13:54.167399  489418 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-gnsjt" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:13:54.173160  489418 pod_ready.go:94] pod "coredns-66bc5c9577-gnsjt" is "Ready"
	I1217 20:13:54.173187  489418 pod_ready.go:86] duration metric: took 5.758866ms for pod "coredns-66bc5c9577-gnsjt" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:13:54.175526  489418 pod_ready.go:83] waiting for pod "etcd-addons-052340" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:13:54.180596  489418 pod_ready.go:94] pod "etcd-addons-052340" is "Ready"
	I1217 20:13:54.180624  489418 pod_ready.go:86] duration metric: took 4.977182ms for pod "etcd-addons-052340" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:13:54.182921  489418 pod_ready.go:83] waiting for pod "kube-apiserver-addons-052340" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:13:54.187562  489418 pod_ready.go:94] pod "kube-apiserver-addons-052340" is "Ready"
	I1217 20:13:54.187612  489418 pod_ready.go:86] duration metric: took 4.661382ms for pod "kube-apiserver-addons-052340" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:13:54.189979  489418 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-052340" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:13:54.567622  489418 pod_ready.go:94] pod "kube-controller-manager-addons-052340" is "Ready"
	I1217 20:13:54.567652  489418 pod_ready.go:86] duration metric: took 377.648528ms for pod "kube-controller-manager-addons-052340" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:13:54.768369  489418 pod_ready.go:83] waiting for pod "kube-proxy-k6bpd" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:13:55.167456  489418 pod_ready.go:94] pod "kube-proxy-k6bpd" is "Ready"
	I1217 20:13:55.167483  489418 pod_ready.go:86] duration metric: took 399.08797ms for pod "kube-proxy-k6bpd" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:13:55.367989  489418 pod_ready.go:83] waiting for pod "kube-scheduler-addons-052340" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:13:55.768538  489418 pod_ready.go:94] pod "kube-scheduler-addons-052340" is "Ready"
	I1217 20:13:55.768568  489418 pod_ready.go:86] duration metric: took 400.507771ms for pod "kube-scheduler-addons-052340" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:13:55.768583  489418 pod_ready.go:40] duration metric: took 1.604772229s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
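The readiness gate above cycles through one label selector per control-plane component. A rough kubectl equivalent, assuming the same selectors as the log and an illustrative timeout (not the test's own tooling):

    # Wait for each kube-system control-plane pod to report Ready, mirroring
    # the label selectors shown in the log above.
    for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
               component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
      kubectl wait --namespace kube-system --for=condition=Ready pod -l "$sel" --timeout=240s
    done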
	I1217 20:13:55.824972  489418 start.go:625] kubectl: 1.33.2, cluster: 1.34.3 (minor skew: 1)
	I1217 20:13:55.828325  489418 out.go:179] * Done! kubectl is now configured to use "addons-052340" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 17 20:14:25 addons-052340 crio[826]: time="2025-12-17T20:14:25.586731339Z" level=info msg="Started container" PID=5330 containerID=bb22606af162e00769968a91b70cfe5bea868806d76a185639200cbb59d4fc7d description=default/test-local-path/busybox id=babf6db8-3dad-46db-bc20-c9f9b8f1c16e name=/runtime.v1.RuntimeService/StartContainer sandboxID=d06de1199e81e662321877df589d2e737053ae9b238b2fc789fe968a2b9389f3
	Dec 17 20:14:26 addons-052340 crio[826]: time="2025-12-17T20:14:26.882592481Z" level=info msg="Stopping pod sandbox: d06de1199e81e662321877df589d2e737053ae9b238b2fc789fe968a2b9389f3" id=63185aef-df1a-46f3-88f1-ce1c3da67516 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 17 20:14:26 addons-052340 crio[826]: time="2025-12-17T20:14:26.882905689Z" level=info msg="Got pod network &{Name:test-local-path Namespace:default ID:d06de1199e81e662321877df589d2e737053ae9b238b2fc789fe968a2b9389f3 UID:b07ebbc5-4394-4f20-8880-7d2c85846d1e NetNS:/var/run/netns/f05ae161-0692-4543-82ca-a43cdd8ac99d Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40020f86c0}] Aliases:map[]}"
	Dec 17 20:14:26 addons-052340 crio[826]: time="2025-12-17T20:14:26.883053038Z" level=info msg="Deleting pod default_test-local-path from CNI network \"kindnet\" (type=ptp)"
	Dec 17 20:14:26 addons-052340 crio[826]: time="2025-12-17T20:14:26.901647728Z" level=info msg="Stopped pod sandbox: d06de1199e81e662321877df589d2e737053ae9b238b2fc789fe968a2b9389f3" id=63185aef-df1a-46f3-88f1-ce1c3da67516 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 17 20:14:28 addons-052340 crio[826]: time="2025-12-17T20:14:28.587488943Z" level=info msg="Running pod sandbox: local-path-storage/helper-pod-delete-pvc-daf0f2d6-ee80-4a73-a7d9-dc14f656f00d/POD" id=19faf86f-1af4-4743-b498-d4fe7b8c20c7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 20:14:28 addons-052340 crio[826]: time="2025-12-17T20:14:28.587554101Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 20:14:28 addons-052340 crio[826]: time="2025-12-17T20:14:28.600409616Z" level=info msg="Got pod network &{Name:helper-pod-delete-pvc-daf0f2d6-ee80-4a73-a7d9-dc14f656f00d Namespace:local-path-storage ID:f93f2a2976160635db0abc1c773616b543d5e9313f487cf9112563aaf4204419 UID:0fdb8069-0f31-4b88-8198-8c3acc905289 NetNS:/var/run/netns/3540faef-8916-461c-98ce-65ca54bc3ead Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40020f8af8}] Aliases:map[]}"
	Dec 17 20:14:28 addons-052340 crio[826]: time="2025-12-17T20:14:28.600441305Z" level=info msg="Adding pod local-path-storage_helper-pod-delete-pvc-daf0f2d6-ee80-4a73-a7d9-dc14f656f00d to CNI network \"kindnet\" (type=ptp)"
	Dec 17 20:14:28 addons-052340 crio[826]: time="2025-12-17T20:14:28.624250982Z" level=info msg="Got pod network &{Name:helper-pod-delete-pvc-daf0f2d6-ee80-4a73-a7d9-dc14f656f00d Namespace:local-path-storage ID:f93f2a2976160635db0abc1c773616b543d5e9313f487cf9112563aaf4204419 UID:0fdb8069-0f31-4b88-8198-8c3acc905289 NetNS:/var/run/netns/3540faef-8916-461c-98ce-65ca54bc3ead Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40020f8af8}] Aliases:map[]}"
	Dec 17 20:14:28 addons-052340 crio[826]: time="2025-12-17T20:14:28.624790834Z" level=info msg="Checking pod local-path-storage_helper-pod-delete-pvc-daf0f2d6-ee80-4a73-a7d9-dc14f656f00d for CNI network kindnet (type=ptp)"
	Dec 17 20:14:28 addons-052340 crio[826]: time="2025-12-17T20:14:28.635491553Z" level=info msg="Ran pod sandbox f93f2a2976160635db0abc1c773616b543d5e9313f487cf9112563aaf4204419 with infra container: local-path-storage/helper-pod-delete-pvc-daf0f2d6-ee80-4a73-a7d9-dc14f656f00d/POD" id=19faf86f-1af4-4743-b498-d4fe7b8c20c7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 20:14:28 addons-052340 crio[826]: time="2025-12-17T20:14:28.639865721Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=57d26e4e-9b7a-4953-aca2-7ff5c7c4ed58 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:14:28 addons-052340 crio[826]: time="2025-12-17T20:14:28.645368217Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=7655d7ed-e719-46ed-9b8e-30f7120baced name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:14:28 addons-052340 crio[826]: time="2025-12-17T20:14:28.65338209Z" level=info msg="Creating container: local-path-storage/helper-pod-delete-pvc-daf0f2d6-ee80-4a73-a7d9-dc14f656f00d/helper-pod" id=2275316c-e951-415a-939a-a052d47f0af7 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 20:14:28 addons-052340 crio[826]: time="2025-12-17T20:14:28.653669494Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 20:14:28 addons-052340 crio[826]: time="2025-12-17T20:14:28.673194562Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 20:14:28 addons-052340 crio[826]: time="2025-12-17T20:14:28.674489537Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 20:14:28 addons-052340 crio[826]: time="2025-12-17T20:14:28.700410703Z" level=info msg="Created container bbbd5e54b06ff9472041ba2da2057b54bd3f4665d625a403f9a941cbd99d3e61: local-path-storage/helper-pod-delete-pvc-daf0f2d6-ee80-4a73-a7d9-dc14f656f00d/helper-pod" id=2275316c-e951-415a-939a-a052d47f0af7 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 20:14:28 addons-052340 crio[826]: time="2025-12-17T20:14:28.705424851Z" level=info msg="Starting container: bbbd5e54b06ff9472041ba2da2057b54bd3f4665d625a403f9a941cbd99d3e61" id=8450d1dd-1d78-4956-948a-0642f7b5a539 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 20:14:28 addons-052340 crio[826]: time="2025-12-17T20:14:28.708024336Z" level=info msg="Started container" PID=5432 containerID=bbbd5e54b06ff9472041ba2da2057b54bd3f4665d625a403f9a941cbd99d3e61 description=local-path-storage/helper-pod-delete-pvc-daf0f2d6-ee80-4a73-a7d9-dc14f656f00d/helper-pod id=8450d1dd-1d78-4956-948a-0642f7b5a539 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f93f2a2976160635db0abc1c773616b543d5e9313f487cf9112563aaf4204419
	Dec 17 20:14:29 addons-052340 crio[826]: time="2025-12-17T20:14:29.899149896Z" level=info msg="Stopping pod sandbox: f93f2a2976160635db0abc1c773616b543d5e9313f487cf9112563aaf4204419" id=d30830f2-0f7a-4bcf-ab30-3862041221d7 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 17 20:14:29 addons-052340 crio[826]: time="2025-12-17T20:14:29.899606186Z" level=info msg="Got pod network &{Name:helper-pod-delete-pvc-daf0f2d6-ee80-4a73-a7d9-dc14f656f00d Namespace:local-path-storage ID:f93f2a2976160635db0abc1c773616b543d5e9313f487cf9112563aaf4204419 UID:0fdb8069-0f31-4b88-8198-8c3acc905289 NetNS:/var/run/netns/3540faef-8916-461c-98ce-65ca54bc3ead Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40020f9490}] Aliases:map[]}"
	Dec 17 20:14:29 addons-052340 crio[826]: time="2025-12-17T20:14:29.899934295Z" level=info msg="Deleting pod local-path-storage_helper-pod-delete-pvc-daf0f2d6-ee80-4a73-a7d9-dc14f656f00d from CNI network \"kindnet\" (type=ptp)"
	Dec 17 20:14:29 addons-052340 crio[826]: time="2025-12-17T20:14:29.932644498Z" level=info msg="Stopped pod sandbox: f93f2a2976160635db0abc1c773616b543d5e9313f487cf9112563aaf4204419" id=d30830f2-0f7a-4bcf-ab30-3862041221d7 name=/runtime.v1.RuntimeService/StopPodSandbox
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                                          NAMESPACE
	bbbd5e54b06ff       fc9db2894f4e4b8c296b8c9dab7e18a6e78de700d21bc0cfaf5c78484226db9c                                                                             1 second ago         Exited              helper-pod                               0                   f93f2a2976160       helper-pod-delete-pvc-daf0f2d6-ee80-4a73-a7d9-dc14f656f00d   local-path-storage
	bb22606af162e       docker.io/library/busybox@sha256:079b4a73854a059a2073c6e1a031b17fcbf23a47c6c59ae760d78045199e403c                                            4 seconds ago        Exited              busybox                                  0                   d06de1199e81e       test-local-path                                              default
	b8e1062d10313       docker.io/library/busybox@sha256:1fa89c01cd0473cedbd1a470abb8c139eeb80920edf1bc55de87851bfb63ea11                                            8 seconds ago        Exited              helper-pod                               0                   0ce11dd85de7f       helper-pod-create-pvc-daf0f2d6-ee80-4a73-a7d9-dc14f656f00d   local-path-storage
	ae20bbb365754       gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9                                          9 seconds ago        Exited              registry-test                            0                   141ae4e7b2f03       registry-test                                                default
	427c6ab355b3b       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          31 seconds ago       Running             busybox                                  0                   ffb09119c6423       busybox                                                      default
	a40f8c4c9d667       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          37 seconds ago       Running             csi-snapshotter                          0                   aea6feee2e609       csi-hostpathplugin-r5tvz                                     kube-system
	5b40645a2f296       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          38 seconds ago       Running             csi-provisioner                          0                   aea6feee2e609       csi-hostpathplugin-r5tvz                                     kube-system
	4a7c07a0b9754       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            40 seconds ago       Running             liveness-probe                           0                   aea6feee2e609       csi-hostpathplugin-r5tvz                                     kube-system
	d3baa47458bc0       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           41 seconds ago       Running             hostpath                                 0                   aea6feee2e609       csi-hostpathplugin-r5tvz                                     kube-system
	87621426512f4       registry.k8s.io/ingress-nginx/controller@sha256:75494e2145fbebf362d24e24e9285b7fbb7da8783ab272092e3126e24ee4776d                             42 seconds ago       Running             controller                               0                   be4e5da881a27       ingress-nginx-controller-85d4c799dd-c8vnl                    ingress-nginx
	fcc9a7828f6b8       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                48 seconds ago       Running             node-driver-registrar                    0                   aea6feee2e609       csi-hostpathplugin-r5tvz                                     kube-system
	8268763706c3e       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 49 seconds ago       Running             gcp-auth                                 0                   b097189bf7bab       gcp-auth-78565c9fb4-sc72c                                    gcp-auth
	c449a66d2de59       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:fadc7bf59b69965b6707edb68022bed4f55a1f99b15f7acd272793e48f171496                            53 seconds ago       Running             gadget                                   0                   99974f7283fd0       gadget-sw4gn                                                 gadget
	765f61bbb3ba8       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              56 seconds ago       Running             registry-proxy                           0                   e469a47e52707       registry-proxy-5q5m2                                         kube-system
	d33875f7a8b74       nvcr.io/nvidia/k8s-device-plugin@sha256:10b7b747520ba2314061b5b319d3b2766b9cec1fd9404109c607e85b30af6905                                     About a minute ago   Running             nvidia-device-plugin-ctr                 0                   63bc0e22d348f       nvidia-device-plugin-daemonset-b7cpw                         kube-system
	79a9e5f943348       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   58e6e4cb7fb24       snapshot-controller-7d9fbc56b8-4pf8h                         kube-system
	0ed0b3ea99114       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:c9c1ef89e4bb9d6c9c6c0b5375c3253a0b951e5b731240be20cebe5593de142d                   About a minute ago   Exited              patch                                    0                   3f2c8eac68227       ingress-nginx-admission-patch-h9l82                          ingress-nginx
	e8ab91246e2c9       gcr.io/cloud-spanner-emulator/emulator@sha256:daeab9cb1978e02113045625e2633619f465f22aac7638101995f4cd03607170                               About a minute ago   Running             cloud-spanner-emulator                   0                   89789ba91cbea       cloud-spanner-emulator-5bdddb765-dfn99                       default
	91ba82c9d4363       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:c9c1ef89e4bb9d6c9c6c0b5375c3253a0b951e5b731240be20cebe5593de142d                   About a minute ago   Exited              create                                   0                   7c9b1656770b5       ingress-nginx-admission-create-tqlpn                         ingress-nginx
	8d7f5c62629eb       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              About a minute ago   Running             csi-resizer                              0                   38bc69ec4fd7b       csi-hostpath-resizer-0                                       kube-system
	af528db69b5d3       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   About a minute ago   Running             csi-external-health-monitor-controller   0                   aea6feee2e609       csi-hostpathplugin-r5tvz                                     kube-system
	028c23d163f91       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               About a minute ago   Running             minikube-ingress-dns                     0                   ab8dfd6e45a7c       kube-ingress-dns-minikube                                    kube-system
	85713e1610062       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           About a minute ago   Running             registry                                 0                   f632bdc32082f       registry-6b586f9694-h2xmf                                    kube-system
	cb82734cbf7f9       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        About a minute ago   Running             metrics-server                           0                   9107e046fafae       metrics-server-85b7d694d7-5g267                              kube-system
	f60e3e143ad43       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             About a minute ago   Running             csi-attacher                             0                   d1ea2f9507cc2       csi-hostpath-attacher-0                                      kube-system
	5f30c60c285e3       docker.io/marcnuri/yakd@sha256:0b7e831df7fe4ad1c8c56a736a8d66bd86e243f6777d3c512ead47199d8fbe1a                                              About a minute ago   Running             yakd                                     0                   1c2c566814bbb       yakd-dashboard-6654c87f9b-pgp6s                              yakd-dashboard
	a0efa1d77e190       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             About a minute ago   Running             local-path-provisioner                   0                   ff8e53075302d       local-path-provisioner-648f6765c9-gtfnn                      local-path-storage
	18d9f4acabfa7       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   5b40c0b75bb78       snapshot-controller-7d9fbc56b8-7528t                         kube-system
	fead2bfafa736       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             About a minute ago   Running             coredns                                  0                   67cdd3f664299       coredns-66bc5c9577-gnsjt                                     kube-system
	2b6e269a93d8b       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             About a minute ago   Running             storage-provisioner                      0                   c4e7bbf51dc90       storage-provisioner                                          kube-system
	7fd15b6b59471       docker.io/kindest/kindnetd@sha256:f1260f5691195cc9a693dc0b55178aa724d944efd62486a8320f0583272b1fa3                                           2 minutes ago        Running             kindnet-cni                              0                   9efa355305cae       kindnet-sk69j                                                kube-system
	6311d7d7f6f04       4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162                                                                             2 minutes ago        Running             kube-proxy                               0                   21e8d3982f40d       kube-proxy-k6bpd                                             kube-system
	6481ede2a21b9       2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6                                                                             2 minutes ago        Running             kube-scheduler                           0                   0aed8ad08d915       kube-scheduler-addons-052340                                 kube-system
	873499eab93d6       cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896                                                                             2 minutes ago        Running             kube-apiserver                           0                   0218faf6e1933       kube-apiserver-addons-052340                                 kube-system
	45a4f23a594c0       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                                                             2 minutes ago        Running             etcd                                     0                   e851b97e5df79       etcd-addons-052340                                           kube-system
	dd0351330b604       7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22                                                                             2 minutes ago        Running             kube-controller-manager                  0                   89e40a49d3838       kube-controller-manager-addons-052340                        kube-system
	
	
	==> coredns [fead2bfafa7367c08236e1dd9040e848a879a067c3b70a78578d71429854ebf6] <==
	[INFO] 10.244.0.16:38007 - 5255 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002104383s
	[INFO] 10.244.0.16:38007 - 23707 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000109752s
	[INFO] 10.244.0.16:38007 - 60262 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000192797s
	[INFO] 10.244.0.16:43152 - 49778 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000163045s
	[INFO] 10.244.0.16:43152 - 49565 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00007973s
	[INFO] 10.244.0.16:41345 - 31905 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000087837s
	[INFO] 10.244.0.16:41345 - 31700 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000067636s
	[INFO] 10.244.0.16:52993 - 3472 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000080517s
	[INFO] 10.244.0.16:52993 - 3300 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000067668s
	[INFO] 10.244.0.16:48520 - 13721 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001134243s
	[INFO] 10.244.0.16:48520 - 13499 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001081558s
	[INFO] 10.244.0.16:54412 - 5994 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000112329s
	[INFO] 10.244.0.16:54412 - 5814 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000069884s
	[INFO] 10.244.0.20:37887 - 60931 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000151623s
	[INFO] 10.244.0.20:49921 - 57504 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000156284s
	[INFO] 10.244.0.20:44766 - 63423 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000151927s
	[INFO] 10.244.0.20:57743 - 63811 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000163579s
	[INFO] 10.244.0.20:49612 - 37551 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000194282s
	[INFO] 10.244.0.20:60295 - 48774 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000126573s
	[INFO] 10.244.0.20:48399 - 2726 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001564179s
	[INFO] 10.244.0.20:43510 - 48024 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.00215597s
	[INFO] 10.244.0.20:43607 - 4893 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001684081s
	[INFO] 10.244.0.20:43732 - 64296 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001687183s
	[INFO] 10.244.0.23:33844 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000174147s
	[INFO] 10.244.0.23:57182 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000099546s
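The NXDOMAIN entries above are the pod resolver walking its search list (the pod's namespace, then svc.cluster.local, cluster.local, and the host's us-east-2.compute.internal domain) before the fully qualified name answers NOERROR. One such walk can be reproduced from the busybox pod listed in the container table, e.g.:

    # Short-name lookup; the resolver expands it through the search list,
    # producing the same NXDOMAIN sequence seen in the CoreDNS log above.
    kubectl exec -n default busybox -- nslookup registry.kube-system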
	
	
	==> describe nodes <==
	Name:               addons-052340
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-052340
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e7c291d147a7a4c759554efbd6d659a1a65fa869
	                    minikube.k8s.io/name=addons-052340
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T20_12_19_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-052340
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-052340"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 20:12:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-052340
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 20:14:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 20:14:21 +0000   Wed, 17 Dec 2025 20:12:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 20:14:21 +0000   Wed, 17 Dec 2025 20:12:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 20:14:21 +0000   Wed, 17 Dec 2025 20:12:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 20:14:21 +0000   Wed, 17 Dec 2025 20:12:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-052340
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 0dc957e113b26e583da13082693ddabc
	  System UUID:                d9215e55-a3af-4a96-a35c-a8b4e9371aea
	  Boot ID:                    7e62835b-3b3c-4293-85c7-fb0aa46e4d00
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (26 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         34s
	  default                     cloud-spanner-emulator-5bdddb765-dfn99       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m1s
	  gadget                      gadget-sw4gn                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m
	  gcp-auth                    gcp-auth-78565c9fb4-sc72c                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         115s
	  ingress-nginx               ingress-nginx-controller-85d4c799dd-c8vnl    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         119s
	  kube-system                 coredns-66bc5c9577-gnsjt                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m6s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 csi-hostpathplugin-r5tvz                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 etcd-addons-052340                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m11s
	  kube-system                 kindnet-sk69j                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m7s
	  kube-system                 kube-apiserver-addons-052340                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m11s
	  kube-system                 kube-controller-manager-addons-052340        200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m11s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-proxy-k6bpd                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m7s
	  kube-system                 kube-scheduler-addons-052340                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m11s
	  kube-system                 metrics-server-85b7d694d7-5g267              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         2m
	  kube-system                 nvidia-device-plugin-daemonset-b7cpw         0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 registry-6b586f9694-h2xmf                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 registry-creds-764b6fb674-4s27d              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 registry-proxy-5q5m2                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 snapshot-controller-7d9fbc56b8-4pf8h         0 (0%)        0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 snapshot-controller-7d9fbc56b8-7528t         0 (0%)        0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m
	  local-path-storage          local-path-provisioner-648f6765c9-gtfnn      0 (0%)        0 (0%)      0 (0%)           0 (0%)         119s
	  yakd-dashboard              yakd-dashboard-6654c87f9b-pgp6s              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     2m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 2m5s   kube-proxy       
	  Normal   Starting                 2m12s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m12s  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m11s  kubelet          Node addons-052340 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m11s  kubelet          Node addons-052340 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m11s  kubelet          Node addons-052340 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m7s   node-controller  Node addons-052340 event: Registered Node addons-052340 in Controller
	  Normal   NodeReady                112s   kubelet          Node addons-052340 status is now: NodeReady
	
	
	==> dmesg <==
	[Dec17 19:02] hrtimer: interrupt took 71042327 ns
	[Dec17 20:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec17 20:12] overlayfs: idmapped layers are currently not supported
	[  +0.080288] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [45a4f23a594c0e4311b2eac64abad5f26d74229e5aa0a19e11b7b7c18f0e227d] <==
	{"level":"warn","ts":"2025-12-17T20:12:14.587122Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:12:14.607118Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:12:14.625375Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:12:14.683690Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:12:14.712657Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:12:14.732601Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:12:14.788258Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:12:14.828131Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:12:14.849275Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:12:14.920203Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:12:14.944635Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:12:14.977457Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:12:15.024085Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:12:15.119501Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:12:15.129570Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:12:15.140512Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:12:15.191606Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:12:15.211329Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:12:15.379827Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:12:32.478622Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:12:32.497865Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:12:41.060987Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:12:41.072325Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:12:41.108339Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:12:41.124628Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50372","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [8268763706c3ec63a63b1f57cba0bf634e39c5b8a6b84bd8166ab9a00e8d2168] <==
	2025/12/17 20:13:40 GCP Auth Webhook started!
	2025/12/17 20:13:56 Ready to marshal response ...
	2025/12/17 20:13:56 Ready to write response ...
	2025/12/17 20:13:56 Ready to marshal response ...
	2025/12/17 20:13:56 Ready to write response ...
	2025/12/17 20:13:56 Ready to marshal response ...
	2025/12/17 20:13:56 Ready to write response ...
	2025/12/17 20:14:18 Ready to marshal response ...
	2025/12/17 20:14:18 Ready to write response ...
	2025/12/17 20:14:19 Ready to marshal response ...
	2025/12/17 20:14:19 Ready to write response ...
	2025/12/17 20:14:19 Ready to marshal response ...
	2025/12/17 20:14:19 Ready to write response ...
	2025/12/17 20:14:28 Ready to marshal response ...
	2025/12/17 20:14:28 Ready to write response ...
	
	
	==> kernel <==
	 20:14:30 up  2:56,  0 user,  load average: 2.57, 2.02, 2.16
	Linux addons-052340 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7fd15b6b594715c56d64ee54d7a6852f04c749cfdf1c08a4bcf60d7a2e38d3b6] <==
	I1217 20:12:28.411955       1 controller.go:711] "Syncing nftables rules"
	I1217 20:12:38.129423       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 20:12:38.129531       1 main.go:301] handling current node
	I1217 20:12:48.130560       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 20:12:48.130593       1 main.go:301] handling current node
	I1217 20:12:58.132826       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 20:12:58.132914       1 main.go:301] handling current node
	I1217 20:13:08.129686       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 20:13:08.129799       1 main.go:301] handling current node
	I1217 20:13:18.138034       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 20:13:18.138070       1 main.go:301] handling current node
	I1217 20:13:28.130171       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 20:13:28.130384       1 main.go:301] handling current node
	I1217 20:13:38.130803       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 20:13:38.130989       1 main.go:301] handling current node
	I1217 20:13:48.131784       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 20:13:48.131826       1 main.go:301] handling current node
	I1217 20:13:58.130010       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 20:13:58.130067       1 main.go:301] handling current node
	I1217 20:14:08.134520       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 20:14:08.134555       1 main.go:301] handling current node
	I1217 20:14:18.130778       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 20:14:18.130889       1 main.go:301] handling current node
	I1217 20:14:28.129299       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 20:14:28.129458       1 main.go:301] handling current node
	
	
	==> kube-apiserver [873499eab93d6e655f10d1bf2c19abe29efc74e958b7418723b5c7c02699e059] <==
	W1217 20:12:32.478360       1 logging.go:55] [core] [Channel #259 SubChannel #260]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1217 20:12:32.497631       1 logging.go:55] [core] [Channel #263 SubChannel #264]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1217 20:12:35.301416       1 alloc.go:328] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.99.249.63"}
	W1217 20:12:38.568331       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.99.249.63:443: connect: connection refused
	E1217 20:12:38.568377       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.99.249.63:443: connect: connection refused" logger="UnhandledError"
	W1217 20:12:38.574012       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.99.249.63:443: connect: connection refused
	E1217 20:12:38.574056       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.99.249.63:443: connect: connection refused" logger="UnhandledError"
	W1217 20:12:38.672984       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.99.249.63:443: connect: connection refused
	E1217 20:12:38.673109       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.99.249.63:443: connect: connection refused" logger="UnhandledError"
	W1217 20:12:41.054536       1 logging.go:55] [core] [Channel #267 SubChannel #268]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1217 20:12:41.072285       1 logging.go:55] [core] [Channel #271 SubChannel #272]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1217 20:12:41.102127       1 logging.go:55] [core] [Channel #275 SubChannel #276]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1217 20:12:41.118259       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	E1217 20:13:02.486324       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.215.219:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.100.215.219:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.100.215.219:443: connect: connection refused" logger="UnhandledError"
	W1217 20:13:02.486801       1 handler_proxy.go:99] no RequestInfo found in the context
	E1217 20:13:02.486972       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1217 20:13:02.487756       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.215.219:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.100.215.219:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.100.215.219:443: connect: connection refused" logger="UnhandledError"
	E1217 20:13:02.494634       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.215.219:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.100.215.219:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.100.215.219:443: connect: connection refused" logger="UnhandledError"
	I1217 20:13:02.592605       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1217 20:14:05.867119       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:55060: use of closed network connection
	E1217 20:14:06.110763       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:55086: use of closed network connection
	E1217 20:14:06.239986       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:55102: use of closed network connection
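
The "failing open" lines in the kube-apiserver log above mean the gcp-auth mutating webhook has failurePolicy Ignore: pods created before its endpoint is reachable are admitted unmutated rather than rejected. A hedged client-go sketch for confirming that policy (the configuration name and clientset plumbing are assumptions of the sketch, not taken from the log):

    // failsOpen prints the webhooks in the named MutatingWebhookConfiguration
    // whose failurePolicy is Ignore, i.e. admission proceeds when the webhook
    // endpoint is unreachable.
    func failsOpen(ctx context.Context, cs kubernetes.Interface, name string) error {
    	cfg, err := cs.AdmissionregistrationV1().
    		MutatingWebhookConfigurations().
    		Get(ctx, name, metav1.GetOptions{})
    	if err != nil {
    		return err
    	}
    	for _, wh := range cfg.Webhooks {
    		if wh.FailurePolicy != nil && *wh.FailurePolicy == admissionregistrationv1.Ignore {
    			fmt.Printf("%s fails open\n", wh.Name)
    		}
    	}
    	return nil
    }

(imports assumed: context, fmt, admissionregistrationv1 "k8s.io/api/admissionregistration/v1", metav1 "k8s.io/apimachinery/pkg/apis/meta/v1", "k8s.io/client-go/kubernetes")
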
	
	
	==> kube-controller-manager [dd0351330b604610224568f93b842fec24e6e4e932b309a6741ffab6f0d17b9f] <==
	I1217 20:12:23.196964       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1217 20:12:23.196967       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1217 20:12:23.197285       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1217 20:12:23.197543       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1217 20:12:23.197769       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1217 20:12:23.197308       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1217 20:12:23.197859       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1217 20:12:23.197322       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1217 20:12:23.198021       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1217 20:12:23.202370       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1217 20:12:23.202471       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1217 20:12:23.204811       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1217 20:12:23.210980       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1217 20:12:23.216224       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 20:12:23.233666       1 shared_informer.go:356] "Caches are synced" controller="job"
	E1217 20:12:30.750065       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1217 20:12:30.775341       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	I1217 20:12:43.172311       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1217 20:12:53.175090       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1217 20:12:53.175518       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1217 20:12:53.175641       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1217 20:12:53.235417       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1217 20:12:53.250843       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1217 20:12:53.277731       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 20:12:53.353012       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [6311d7d7f6f04d57961651f5acfdebe6792efa3db55df12e18440d726be5abcf] <==
	I1217 20:12:25.151776       1 server_linux.go:53] "Using iptables proxy"
	I1217 20:12:25.253551       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1217 20:12:25.354279       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1217 20:12:25.354344       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1217 20:12:25.354460       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 20:12:25.421511       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 20:12:25.421573       1 server_linux.go:132] "Using iptables Proxier"
	I1217 20:12:25.435839       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 20:12:25.436282       1 server.go:527] "Version info" version="v1.34.3"
	I1217 20:12:25.436307       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 20:12:25.445057       1 config.go:200] "Starting service config controller"
	I1217 20:12:25.445076       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 20:12:25.445095       1 config.go:106] "Starting endpoint slice config controller"
	I1217 20:12:25.445099       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 20:12:25.445117       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 20:12:25.445129       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 20:12:25.446188       1 config.go:309] "Starting node config controller"
	I1217 20:12:25.446208       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 20:12:25.446215       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 20:12:25.546379       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1217 20:12:25.546420       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1217 20:12:25.546461       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
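
kube-proxy warns above that nodePortAddresses is unset, so NodePort traffic is accepted on every local IP. Its suggested fix maps to the nodePortAddresses field of the kube-proxy configuration; a sketch using the published config types, with the "primary" value taken from the log's own suggestion and the rest illustrative:

    import kubeproxyconfig "k8s.io/kube-proxy/config/v1alpha1"

    cfg := kubeproxyconfig.KubeProxyConfiguration{
    	// Restrict NodePort listeners to the node's primary address,
    	// as the warning recommends, instead of all local IPs.
    	NodePortAddresses: []string{"primary"},
    }
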
	
	
	==> kube-scheduler [6481ede2a21b9816559407d8564d57427a11b523832053f995eb07f4d1ce83bc] <==
	I1217 20:12:17.384400       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 20:12:17.387192       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1217 20:12:17.387291       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 20:12:17.387314       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 20:12:17.387332       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1217 20:12:17.390308       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1217 20:12:17.390409       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1217 20:12:17.390470       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1217 20:12:17.400062       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1217 20:12:17.400179       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1217 20:12:17.404361       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1217 20:12:17.404546       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1217 20:12:17.404820       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1217 20:12:17.404987       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1217 20:12:17.405093       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1217 20:12:17.407109       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1217 20:12:17.407231       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1217 20:12:17.407310       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1217 20:12:17.407555       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1217 20:12:17.407697       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1217 20:12:17.407810       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1217 20:12:17.407900       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1217 20:12:17.407982       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1217 20:12:17.408065       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	I1217 20:12:18.488293       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 17 20:14:27 addons-052340 kubelet[1289]: I1217 20:14:27.065835    1289 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b07ebbc5-4394-4f20-8880-7d2c85846d1e-kube-api-access-jjf9r" (OuterVolumeSpecName: "kube-api-access-jjf9r") pod "b07ebbc5-4394-4f20-8880-7d2c85846d1e" (UID: "b07ebbc5-4394-4f20-8880-7d2c85846d1e"). InnerVolumeSpecName "kube-api-access-jjf9r". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 17 20:14:27 addons-052340 kubelet[1289]: I1217 20:14:27.159240    1289 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jjf9r\" (UniqueName: \"kubernetes.io/projected/b07ebbc5-4394-4f20-8880-7d2c85846d1e-kube-api-access-jjf9r\") on node \"addons-052340\" DevicePath \"\""
	Dec 17 20:14:27 addons-052340 kubelet[1289]: I1217 20:14:27.159286    1289 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/b07ebbc5-4394-4f20-8880-7d2c85846d1e-gcp-creds\") on node \"addons-052340\" DevicePath \"\""
	Dec 17 20:14:27 addons-052340 kubelet[1289]: I1217 20:14:27.159298    1289 reconciler_common.go:299] "Volume detached for volume \"pvc-daf0f2d6-ee80-4a73-a7d9-dc14f656f00d\" (UniqueName: \"kubernetes.io/host-path/b07ebbc5-4394-4f20-8880-7d2c85846d1e-pvc-daf0f2d6-ee80-4a73-a7d9-dc14f656f00d\") on node \"addons-052340\" DevicePath \"\""
	Dec 17 20:14:27 addons-052340 kubelet[1289]: I1217 20:14:27.887552    1289 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d06de1199e81e662321877df589d2e737053ae9b238b2fc789fe968a2b9389f3"
	Dec 17 20:14:28 addons-052340 kubelet[1289]: I1217 20:14:28.370674    1289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/0fdb8069-0f31-4b88-8198-8c3acc905289-script\") pod \"helper-pod-delete-pvc-daf0f2d6-ee80-4a73-a7d9-dc14f656f00d\" (UID: \"0fdb8069-0f31-4b88-8198-8c3acc905289\") " pod="local-path-storage/helper-pod-delete-pvc-daf0f2d6-ee80-4a73-a7d9-dc14f656f00d"
	Dec 17 20:14:28 addons-052340 kubelet[1289]: I1217 20:14:28.370744    1289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgp25\" (UniqueName: \"kubernetes.io/projected/0fdb8069-0f31-4b88-8198-8c3acc905289-kube-api-access-zgp25\") pod \"helper-pod-delete-pvc-daf0f2d6-ee80-4a73-a7d9-dc14f656f00d\" (UID: \"0fdb8069-0f31-4b88-8198-8c3acc905289\") " pod="local-path-storage/helper-pod-delete-pvc-daf0f2d6-ee80-4a73-a7d9-dc14f656f00d"
	Dec 17 20:14:28 addons-052340 kubelet[1289]: I1217 20:14:28.370776    1289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/0fdb8069-0f31-4b88-8198-8c3acc905289-gcp-creds\") pod \"helper-pod-delete-pvc-daf0f2d6-ee80-4a73-a7d9-dc14f656f00d\" (UID: \"0fdb8069-0f31-4b88-8198-8c3acc905289\") " pod="local-path-storage/helper-pod-delete-pvc-daf0f2d6-ee80-4a73-a7d9-dc14f656f00d"
	Dec 17 20:14:28 addons-052340 kubelet[1289]: I1217 20:14:28.370815    1289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/0fdb8069-0f31-4b88-8198-8c3acc905289-data\") pod \"helper-pod-delete-pvc-daf0f2d6-ee80-4a73-a7d9-dc14f656f00d\" (UID: \"0fdb8069-0f31-4b88-8198-8c3acc905289\") " pod="local-path-storage/helper-pod-delete-pvc-daf0f2d6-ee80-4a73-a7d9-dc14f656f00d"
	Dec 17 20:14:28 addons-052340 kubelet[1289]: I1217 20:14:28.927763    1289 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b07ebbc5-4394-4f20-8880-7d2c85846d1e" path="/var/lib/kubelet/pods/b07ebbc5-4394-4f20-8880-7d2c85846d1e/volumes"
	Dec 17 20:14:30 addons-052340 kubelet[1289]: I1217 20:14:30.091195    1289 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/0fdb8069-0f31-4b88-8198-8c3acc905289-data\") pod \"0fdb8069-0f31-4b88-8198-8c3acc905289\" (UID: \"0fdb8069-0f31-4b88-8198-8c3acc905289\") "
	Dec 17 20:14:30 addons-052340 kubelet[1289]: I1217 20:14:30.092613    1289 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/0fdb8069-0f31-4b88-8198-8c3acc905289-gcp-creds\") pod \"0fdb8069-0f31-4b88-8198-8c3acc905289\" (UID: \"0fdb8069-0f31-4b88-8198-8c3acc905289\") "
	Dec 17 20:14:30 addons-052340 kubelet[1289]: I1217 20:14:30.092666    1289 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/0fdb8069-0f31-4b88-8198-8c3acc905289-script\") pod \"0fdb8069-0f31-4b88-8198-8c3acc905289\" (UID: \"0fdb8069-0f31-4b88-8198-8c3acc905289\") "
	Dec 17 20:14:30 addons-052340 kubelet[1289]: I1217 20:14:30.092723    1289 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgp25\" (UniqueName: \"kubernetes.io/projected/0fdb8069-0f31-4b88-8198-8c3acc905289-kube-api-access-zgp25\") pod \"0fdb8069-0f31-4b88-8198-8c3acc905289\" (UID: \"0fdb8069-0f31-4b88-8198-8c3acc905289\") "
	Dec 17 20:14:30 addons-052340 kubelet[1289]: I1217 20:14:30.093385    1289 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0fdb8069-0f31-4b88-8198-8c3acc905289-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "0fdb8069-0f31-4b88-8198-8c3acc905289" (UID: "0fdb8069-0f31-4b88-8198-8c3acc905289"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Dec 17 20:14:30 addons-052340 kubelet[1289]: I1217 20:14:30.093466    1289 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0fdb8069-0f31-4b88-8198-8c3acc905289-data" (OuterVolumeSpecName: "data") pod "0fdb8069-0f31-4b88-8198-8c3acc905289" (UID: "0fdb8069-0f31-4b88-8198-8c3acc905289"). InnerVolumeSpecName "data". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Dec 17 20:14:30 addons-052340 kubelet[1289]: I1217 20:14:30.093818    1289 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fdb8069-0f31-4b88-8198-8c3acc905289-script" (OuterVolumeSpecName: "script") pod "0fdb8069-0f31-4b88-8198-8c3acc905289" (UID: "0fdb8069-0f31-4b88-8198-8c3acc905289"). InnerVolumeSpecName "script". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
	Dec 17 20:14:30 addons-052340 kubelet[1289]: I1217 20:14:30.108899    1289 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fdb8069-0f31-4b88-8198-8c3acc905289-kube-api-access-zgp25" (OuterVolumeSpecName: "kube-api-access-zgp25") pod "0fdb8069-0f31-4b88-8198-8c3acc905289" (UID: "0fdb8069-0f31-4b88-8198-8c3acc905289"). InnerVolumeSpecName "kube-api-access-zgp25". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 17 20:14:30 addons-052340 kubelet[1289]: I1217 20:14:30.193869    1289 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/0fdb8069-0f31-4b88-8198-8c3acc905289-gcp-creds\") on node \"addons-052340\" DevicePath \"\""
	Dec 17 20:14:30 addons-052340 kubelet[1289]: I1217 20:14:30.193946    1289 reconciler_common.go:299] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/0fdb8069-0f31-4b88-8198-8c3acc905289-script\") on node \"addons-052340\" DevicePath \"\""
	Dec 17 20:14:30 addons-052340 kubelet[1289]: I1217 20:14:30.193961    1289 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zgp25\" (UniqueName: \"kubernetes.io/projected/0fdb8069-0f31-4b88-8198-8c3acc905289-kube-api-access-zgp25\") on node \"addons-052340\" DevicePath \"\""
	Dec 17 20:14:30 addons-052340 kubelet[1289]: I1217 20:14:30.193973    1289 reconciler_common.go:299] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/0fdb8069-0f31-4b88-8198-8c3acc905289-data\") on node \"addons-052340\" DevicePath \"\""
	Dec 17 20:14:30 addons-052340 kubelet[1289]: I1217 20:14:30.904495    1289 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f93f2a2976160635db0abc1c773616b543d5e9313f487cf9112563aaf4204419"
	Dec 17 20:14:30 addons-052340 kubelet[1289]: E1217 20:14:30.906603    1289 status_manager.go:1018] "Failed to get status for pod" err="pods \"helper-pod-delete-pvc-daf0f2d6-ee80-4a73-a7d9-dc14f656f00d\" is forbidden: User \"system:node:addons-052340\" cannot get resource \"pods\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-052340' and this object" podUID="0fdb8069-0f31-4b88-8198-8c3acc905289" pod="local-path-storage/helper-pod-delete-pvc-daf0f2d6-ee80-4a73-a7d9-dc14f656f00d"
	Dec 17 20:14:30 addons-052340 kubelet[1289]: I1217 20:14:30.922083    1289 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fdb8069-0f31-4b88-8198-8c3acc905289" path="/var/lib/kubelet/pods/0fdb8069-0f31-4b88-8198-8c3acc905289/volumes"
	
	
	==> storage-provisioner [2b6e269a93d8bccdf1b28bd4369628c766dbf092b4be199c7da9d8d756358c68] <==
	W1217 20:14:06.739008       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:14:08.742433       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:14:08.747147       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:14:10.750290       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:14:10.755343       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:14:12.758988       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:14:12.764570       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:14:14.768521       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:14:14.773102       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:14:16.776548       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:14:16.781886       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:14:18.790816       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:14:18.797571       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:14:20.801419       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:14:20.806834       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:14:22.809574       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:14:22.814230       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:14:24.817428       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:14:24.822092       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:14:26.825301       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:14:26.830026       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:14:28.850128       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:14:28.872281       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:14:30.875966       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:14:30.880786       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
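
The steady stream of warnings above comes from storage-provisioner still reading the v1 Endpoints API (likely for its leader-election record), which is deprecated since v1.33 in favour of EndpointSlice. For code that reads service endpoints, the replacement lookup is by the service-name label on EndpointSlices; a hedged client-go sketch (service name and namespace are placeholders):

    // List the EndpointSlices backing a Service, the discovery.k8s.io/v1
    // replacement for reading v1 Endpoints directly.
    slices, err := cs.DiscoveryV1().EndpointSlices("kube-system").List(ctx,
    	metav1.ListOptions{LabelSelector: discoveryv1.LabelServiceName + "=my-service"})
    if err != nil {
    	return err
    }
    for _, s := range slices.Items {
    	for _, ep := range s.Endpoints {
    		fmt.Println(ep.Addresses) // addresses for this slice of the service
    	}
    }

(imports assumed: discoveryv1 "k8s.io/api/discovery/v1", metav1 "k8s.io/apimachinery/pkg/apis/meta/v1")
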
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-052340 -n addons-052340
helpers_test.go:270: (dbg) Run:  kubectl --context addons-052340 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: ingress-nginx-admission-create-tqlpn ingress-nginx-admission-patch-h9l82 registry-creds-764b6fb674-4s27d
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context addons-052340 describe pod ingress-nginx-admission-create-tqlpn ingress-nginx-admission-patch-h9l82 registry-creds-764b6fb674-4s27d
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-052340 describe pod ingress-nginx-admission-create-tqlpn ingress-nginx-admission-patch-h9l82 registry-creds-764b6fb674-4s27d: exit status 1 (88.34865ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-tqlpn" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-h9l82" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-4s27d" not found

** /stderr **
helpers_test.go:288: kubectl --context addons-052340 describe pod ingress-nginx-admission-create-tqlpn ingress-nginx-admission-patch-h9l82 registry-creds-764b6fb674-4s27d: exit status 1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-052340 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-052340 addons disable headlamp --alsologtostderr -v=1: exit status 11 (317.607452ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1217 20:14:32.013971  496710 out.go:360] Setting OutFile to fd 1 ...
	I1217 20:14:32.014684  496710 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:14:32.014703  496710 out.go:374] Setting ErrFile to fd 2...
	I1217 20:14:32.014711  496710 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:14:32.015072  496710 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-485134/.minikube/bin
	I1217 20:14:32.015435  496710 mustload.go:66] Loading cluster: addons-052340
	I1217 20:14:32.015925  496710 config.go:182] Loaded profile config "addons-052340": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:14:32.015949  496710 addons.go:622] checking whether the cluster is paused
	I1217 20:14:32.016068  496710 config.go:182] Loaded profile config "addons-052340": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:14:32.016085  496710 host.go:66] Checking if "addons-052340" exists ...
	I1217 20:14:32.016726  496710 cli_runner.go:164] Run: docker container inspect addons-052340 --format={{.State.Status}}
	I1217 20:14:32.041878  496710 ssh_runner.go:195] Run: systemctl --version
	I1217 20:14:32.041943  496710 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-052340
	I1217 20:14:32.067744  496710 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/addons-052340/id_rsa Username:docker}
	I1217 20:14:32.211099  496710 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 20:14:32.211229  496710 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 20:14:32.242163  496710 cri.go:89] found id: "a40f8c4c9d66754b6dd5a9559839fcf1fe8c00a06c71ce2f09926993d2c3d9eb"
	I1217 20:14:32.242185  496710 cri.go:89] found id: "5b40645a2f29661a7d2787522f23071b2c894de3a6b5b7638a2111a3f9485374"
	I1217 20:14:32.242190  496710 cri.go:89] found id: "4a7c07a0b97549aa9a9ce8e8e19127c9c8cf4e24abdccd8f61dcd057b97c9d88"
	I1217 20:14:32.242193  496710 cri.go:89] found id: "d3baa47458bc085e8d55815cf0497461a119e95081ce45cce1d577901a7dbf8d"
	I1217 20:14:32.242196  496710 cri.go:89] found id: "fcc9a7828f6b875df447059eabaf032b1c2d17e3c003bd1948f528d321bd3c85"
	I1217 20:14:32.242199  496710 cri.go:89] found id: "765f61bbb3ba86834abad3ede0a99451b487e6dc73b004c63a20a8f02e1d84a5"
	I1217 20:14:32.242202  496710 cri.go:89] found id: "d33875f7a8b74239db487ffa04bcf41c083f1ad65f29c43e2ae8b0dd783b521c"
	I1217 20:14:32.242205  496710 cri.go:89] found id: "79a9e5f9433489a03f4b2296098c887736f30cde921105d6ffa972cf7406abb4"
	I1217 20:14:32.242208  496710 cri.go:89] found id: "8d7f5c62629eb9e2812369690fb943f4e57ef24985d574b704ffc9b79a56d549"
	I1217 20:14:32.242214  496710 cri.go:89] found id: "af528db69b5d34f2ebcd038272fbfe9866e82f22a612276fb737683d83c256b9"
	I1217 20:14:32.242217  496710 cri.go:89] found id: "028c23d163f91b2b1e5d8071a70c7fb133ac203b414302242719cd40dba8d733"
	I1217 20:14:32.242220  496710 cri.go:89] found id: "85713e1610062bb4691b694312bf4faa71e96232dd94bc9fc564053646cde3a0"
	I1217 20:14:32.242223  496710 cri.go:89] found id: "cb82734cbf7f969e92c5a0b06400eb1d90d484748c850c4888051348e720776e"
	I1217 20:14:32.242226  496710 cri.go:89] found id: "f60e3e143ad43d5768067a6b27a4ece464edd1136c6931405c68bea040dd1097"
	I1217 20:14:32.242229  496710 cri.go:89] found id: "18d9f4acabfa7df7f42816d2545211aeda7f7362f2af1f80e188ea780addc6f8"
	I1217 20:14:32.242238  496710 cri.go:89] found id: "fead2bfafa7367c08236e1dd9040e848a879a067c3b70a78578d71429854ebf6"
	I1217 20:14:32.242241  496710 cri.go:89] found id: "2b6e269a93d8bccdf1b28bd4369628c766dbf092b4be199c7da9d8d756358c68"
	I1217 20:14:32.242248  496710 cri.go:89] found id: "7fd15b6b594715c56d64ee54d7a6852f04c749cfdf1c08a4bcf60d7a2e38d3b6"
	I1217 20:14:32.242251  496710 cri.go:89] found id: "6311d7d7f6f04d57961651f5acfdebe6792efa3db55df12e18440d726be5abcf"
	I1217 20:14:32.242254  496710 cri.go:89] found id: "6481ede2a21b9816559407d8564d57427a11b523832053f995eb07f4d1ce83bc"
	I1217 20:14:32.242258  496710 cri.go:89] found id: "873499eab93d6e655f10d1bf2c19abe29efc74e958b7418723b5c7c02699e059"
	I1217 20:14:32.242261  496710 cri.go:89] found id: "45a4f23a594c0e4311b2eac64abad5f26d74229e5aa0a19e11b7b7c18f0e227d"
	I1217 20:14:32.242264  496710 cri.go:89] found id: "dd0351330b604610224568f93b842fec24e6e4e932b309a6741ffab6f0d17b9f"
	I1217 20:14:32.242267  496710 cri.go:89] found id: ""
	I1217 20:14:32.242321  496710 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 20:14:32.258077  496710 out.go:203] 
	W1217 20:14:32.260914  496710 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T20:14:32Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T20:14:32Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 20:14:32.260942  496710 out.go:285] * 
	* 
	W1217 20:14:32.266584  496710 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 20:14:32.269454  496710 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable headlamp addon: args "out/minikube-linux-arm64 -p addons-052340 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (3.61s)
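
Every addon-disable failure in this run shares the same root cause visible in the stderr above: the paused-state check shells out to `sudo runc list -f json`, and on this crio node the runc state directory /run/runc does not exist, so the command exits 1 and minikube aborts with MK_ADDON_DISABLE_PAUSED. A hedged Go sketch of a more tolerant probe (treating a missing state directory as "no containers" is an assumption of this sketch, not what minikube currently does):

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // pausedContainers runs `runc list -f json` and returns the IDs of paused
    // containers. A missing state directory is treated as an empty list.
    func pausedContainers() ([]string, error) {
    	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
    	if err != nil {
    		if ee, ok := err.(*exec.ExitError); ok &&
    			strings.Contains(string(ee.Stderr), "no such file or directory") {
    			return nil, nil // no /run/runc: nothing running, so nothing paused
    		}
    		return nil, err
    	}
    	// runc prints a JSON array of container states ("null" when empty).
    	var ctrs []struct {
    		ID     string `json:"id"`
    		Status string `json:"status"`
    	}
    	if err := json.Unmarshal(out, &ctrs); err != nil {
    		return nil, err
    	}
    	var paused []string
    	for _, c := range ctrs {
    		if c.Status == "paused" {
    			paused = append(paused, c.ID)
    		}
    	}
    	return paused, nil
    }

    func main() {
    	ids, err := pausedContainers()
    	if err != nil {
    		fmt.Println("probe failed:", err)
    		return
    	}
    	fmt.Println("paused containers:", ids)
    }
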

x
+
TestAddons/parallel/CloudSpanner (5.41s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5bdddb765-dfn99" [d8c08ec1-4c83-4493-b213-294824acbeca] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.009272576s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-052340 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-052340 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (393.748534ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1217 20:14:28.498621  496126 out.go:360] Setting OutFile to fd 1 ...
	I1217 20:14:28.499478  496126 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:14:28.499499  496126 out.go:374] Setting ErrFile to fd 2...
	I1217 20:14:28.499506  496126 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:14:28.499852  496126 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-485134/.minikube/bin
	I1217 20:14:28.500168  496126 mustload.go:66] Loading cluster: addons-052340
	I1217 20:14:28.500559  496126 config.go:182] Loaded profile config "addons-052340": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:14:28.500581  496126 addons.go:622] checking whether the cluster is paused
	I1217 20:14:28.500702  496126 config.go:182] Loaded profile config "addons-052340": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:14:28.500718  496126 host.go:66] Checking if "addons-052340" exists ...
	I1217 20:14:28.501276  496126 cli_runner.go:164] Run: docker container inspect addons-052340 --format={{.State.Status}}
	I1217 20:14:28.519371  496126 ssh_runner.go:195] Run: systemctl --version
	I1217 20:14:28.519432  496126 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-052340
	I1217 20:14:28.546964  496126 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/addons-052340/id_rsa Username:docker}
	I1217 20:14:28.713758  496126 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 20:14:28.713848  496126 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 20:14:28.772468  496126 cri.go:89] found id: "a40f8c4c9d66754b6dd5a9559839fcf1fe8c00a06c71ce2f09926993d2c3d9eb"
	I1217 20:14:28.772498  496126 cri.go:89] found id: "5b40645a2f29661a7d2787522f23071b2c894de3a6b5b7638a2111a3f9485374"
	I1217 20:14:28.772503  496126 cri.go:89] found id: "4a7c07a0b97549aa9a9ce8e8e19127c9c8cf4e24abdccd8f61dcd057b97c9d88"
	I1217 20:14:28.772512  496126 cri.go:89] found id: "d3baa47458bc085e8d55815cf0497461a119e95081ce45cce1d577901a7dbf8d"
	I1217 20:14:28.772517  496126 cri.go:89] found id: "fcc9a7828f6b875df447059eabaf032b1c2d17e3c003bd1948f528d321bd3c85"
	I1217 20:14:28.772521  496126 cri.go:89] found id: "765f61bbb3ba86834abad3ede0a99451b487e6dc73b004c63a20a8f02e1d84a5"
	I1217 20:14:28.772524  496126 cri.go:89] found id: "d33875f7a8b74239db487ffa04bcf41c083f1ad65f29c43e2ae8b0dd783b521c"
	I1217 20:14:28.772527  496126 cri.go:89] found id: "79a9e5f9433489a03f4b2296098c887736f30cde921105d6ffa972cf7406abb4"
	I1217 20:14:28.772529  496126 cri.go:89] found id: "8d7f5c62629eb9e2812369690fb943f4e57ef24985d574b704ffc9b79a56d549"
	I1217 20:14:28.772540  496126 cri.go:89] found id: "af528db69b5d34f2ebcd038272fbfe9866e82f22a612276fb737683d83c256b9"
	I1217 20:14:28.772547  496126 cri.go:89] found id: "028c23d163f91b2b1e5d8071a70c7fb133ac203b414302242719cd40dba8d733"
	I1217 20:14:28.772550  496126 cri.go:89] found id: "85713e1610062bb4691b694312bf4faa71e96232dd94bc9fc564053646cde3a0"
	I1217 20:14:28.772553  496126 cri.go:89] found id: "cb82734cbf7f969e92c5a0b06400eb1d90d484748c850c4888051348e720776e"
	I1217 20:14:28.772556  496126 cri.go:89] found id: "f60e3e143ad43d5768067a6b27a4ece464edd1136c6931405c68bea040dd1097"
	I1217 20:14:28.772562  496126 cri.go:89] found id: "18d9f4acabfa7df7f42816d2545211aeda7f7362f2af1f80e188ea780addc6f8"
	I1217 20:14:28.772567  496126 cri.go:89] found id: "fead2bfafa7367c08236e1dd9040e848a879a067c3b70a78578d71429854ebf6"
	I1217 20:14:28.772570  496126 cri.go:89] found id: "2b6e269a93d8bccdf1b28bd4369628c766dbf092b4be199c7da9d8d756358c68"
	I1217 20:14:28.772574  496126 cri.go:89] found id: "7fd15b6b594715c56d64ee54d7a6852f04c749cfdf1c08a4bcf60d7a2e38d3b6"
	I1217 20:14:28.772576  496126 cri.go:89] found id: "6311d7d7f6f04d57961651f5acfdebe6792efa3db55df12e18440d726be5abcf"
	I1217 20:14:28.772579  496126 cri.go:89] found id: "6481ede2a21b9816559407d8564d57427a11b523832053f995eb07f4d1ce83bc"
	I1217 20:14:28.772590  496126 cri.go:89] found id: "873499eab93d6e655f10d1bf2c19abe29efc74e958b7418723b5c7c02699e059"
	I1217 20:14:28.772593  496126 cri.go:89] found id: "45a4f23a594c0e4311b2eac64abad5f26d74229e5aa0a19e11b7b7c18f0e227d"
	I1217 20:14:28.772599  496126 cri.go:89] found id: "dd0351330b604610224568f93b842fec24e6e4e932b309a6741ffab6f0d17b9f"
	I1217 20:14:28.772602  496126 cri.go:89] found id: ""
	I1217 20:14:28.772668  496126 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 20:14:28.792357  496126 out.go:203] 
	W1217 20:14:28.795541  496126 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T20:14:28Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T20:14:28Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 20:14:28.795624  496126 out.go:285] * 
	* 
	W1217 20:14:28.803105  496126 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 20:14:28.813369  496126 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable cloud-spanner addon: args "out/minikube-linux-arm64 -p addons-052340 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.41s)
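Note on the addon-disable failures: every "addons disable" invocation in this run exits 11 at the same point. minikube's paused-state check first lists kube-system containers through crictl (which succeeds, as the "found id" lines show) and then shells out to "sudo runc list -f json", which fails because /run/runc does not exist on this cri-o node. A hand-check sketch follows; the /run/crio path is an assumption about cri-o's default state directory, not something the log shows.

  # Reproduce the failing half of the check (command taken verbatim from the log):
  minikube -p addons-052340 ssh -- sudo runc list -f json
  # Expected on this node: "open /run/runc: no such file or directory"
  # See which runtime state directories actually exist (assumed default paths):
  minikube -p addons-052340 ssh -- ls -d /run/runc /run/crio
  # crictl talks to cri-o directly, which is why the container listing succeeded:
  minikube -p addons-052340 ssh -- sudo crictl info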

                                                
                                    
x
+
TestAddons/parallel/LocalPath (9.54s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-052340 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-052340 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-052340 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-052340 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-052340 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-052340 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-052340 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-052340 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [b07ebbc5-4394-4f20-8880-7d2c85846d1e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [b07ebbc5-4394-4f20-8880-7d2c85846d1e] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [b07ebbc5-4394-4f20-8880-7d2c85846d1e] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.002900152s
addons_test.go:969: (dbg) Run:  kubectl --context addons-052340 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-linux-arm64 -p addons-052340 ssh "cat /opt/local-path-provisioner/pvc-daf0f2d6-ee80-4a73-a7d9-dc14f656f00d_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-052340 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-052340 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-052340 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-052340 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (332.740885ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 20:14:28.377314  496105 out.go:360] Setting OutFile to fd 1 ...
	I1217 20:14:28.378152  496105 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:14:28.378200  496105 out.go:374] Setting ErrFile to fd 2...
	I1217 20:14:28.378220  496105 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:14:28.378495  496105 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-485134/.minikube/bin
	I1217 20:14:28.378836  496105 mustload.go:66] Loading cluster: addons-052340
	I1217 20:14:28.379259  496105 config.go:182] Loaded profile config "addons-052340": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:14:28.379303  496105 addons.go:622] checking whether the cluster is paused
	I1217 20:14:28.379434  496105 config.go:182] Loaded profile config "addons-052340": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:14:28.379469  496105 host.go:66] Checking if "addons-052340" exists ...
	I1217 20:14:28.380064  496105 cli_runner.go:164] Run: docker container inspect addons-052340 --format={{.State.Status}}
	I1217 20:14:28.398760  496105 ssh_runner.go:195] Run: systemctl --version
	I1217 20:14:28.398825  496105 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-052340
	I1217 20:14:28.417899  496105 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/addons-052340/id_rsa Username:docker}
	I1217 20:14:28.531258  496105 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 20:14:28.531347  496105 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 20:14:28.618508  496105 cri.go:89] found id: "a40f8c4c9d66754b6dd5a9559839fcf1fe8c00a06c71ce2f09926993d2c3d9eb"
	I1217 20:14:28.618529  496105 cri.go:89] found id: "5b40645a2f29661a7d2787522f23071b2c894de3a6b5b7638a2111a3f9485374"
	I1217 20:14:28.618534  496105 cri.go:89] found id: "4a7c07a0b97549aa9a9ce8e8e19127c9c8cf4e24abdccd8f61dcd057b97c9d88"
	I1217 20:14:28.618537  496105 cri.go:89] found id: "d3baa47458bc085e8d55815cf0497461a119e95081ce45cce1d577901a7dbf8d"
	I1217 20:14:28.618541  496105 cri.go:89] found id: "fcc9a7828f6b875df447059eabaf032b1c2d17e3c003bd1948f528d321bd3c85"
	I1217 20:14:28.618548  496105 cri.go:89] found id: "765f61bbb3ba86834abad3ede0a99451b487e6dc73b004c63a20a8f02e1d84a5"
	I1217 20:14:28.618555  496105 cri.go:89] found id: "d33875f7a8b74239db487ffa04bcf41c083f1ad65f29c43e2ae8b0dd783b521c"
	I1217 20:14:28.618559  496105 cri.go:89] found id: "79a9e5f9433489a03f4b2296098c887736f30cde921105d6ffa972cf7406abb4"
	I1217 20:14:28.618562  496105 cri.go:89] found id: "8d7f5c62629eb9e2812369690fb943f4e57ef24985d574b704ffc9b79a56d549"
	I1217 20:14:28.618571  496105 cri.go:89] found id: "af528db69b5d34f2ebcd038272fbfe9866e82f22a612276fb737683d83c256b9"
	I1217 20:14:28.618574  496105 cri.go:89] found id: "028c23d163f91b2b1e5d8071a70c7fb133ac203b414302242719cd40dba8d733"
	I1217 20:14:28.618577  496105 cri.go:89] found id: "85713e1610062bb4691b694312bf4faa71e96232dd94bc9fc564053646cde3a0"
	I1217 20:14:28.618580  496105 cri.go:89] found id: "cb82734cbf7f969e92c5a0b06400eb1d90d484748c850c4888051348e720776e"
	I1217 20:14:28.618583  496105 cri.go:89] found id: "f60e3e143ad43d5768067a6b27a4ece464edd1136c6931405c68bea040dd1097"
	I1217 20:14:28.618585  496105 cri.go:89] found id: "18d9f4acabfa7df7f42816d2545211aeda7f7362f2af1f80e188ea780addc6f8"
	I1217 20:14:28.618590  496105 cri.go:89] found id: "fead2bfafa7367c08236e1dd9040e848a879a067c3b70a78578d71429854ebf6"
	I1217 20:14:28.618593  496105 cri.go:89] found id: "2b6e269a93d8bccdf1b28bd4369628c766dbf092b4be199c7da9d8d756358c68"
	I1217 20:14:28.618596  496105 cri.go:89] found id: "7fd15b6b594715c56d64ee54d7a6852f04c749cfdf1c08a4bcf60d7a2e38d3b6"
	I1217 20:14:28.618599  496105 cri.go:89] found id: "6311d7d7f6f04d57961651f5acfdebe6792efa3db55df12e18440d726be5abcf"
	I1217 20:14:28.618602  496105 cri.go:89] found id: "6481ede2a21b9816559407d8564d57427a11b523832053f995eb07f4d1ce83bc"
	I1217 20:14:28.618608  496105 cri.go:89] found id: "873499eab93d6e655f10d1bf2c19abe29efc74e958b7418723b5c7c02699e059"
	I1217 20:14:28.618611  496105 cri.go:89] found id: "45a4f23a594c0e4311b2eac64abad5f26d74229e5aa0a19e11b7b7c18f0e227d"
	I1217 20:14:28.618614  496105 cri.go:89] found id: "dd0351330b604610224568f93b842fec24e6e4e932b309a6741ffab6f0d17b9f"
	I1217 20:14:28.618617  496105 cri.go:89] found id: ""
	I1217 20:14:28.618667  496105 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 20:14:28.644519  496105 out.go:203] 
	W1217 20:14:28.647892  496105 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T20:14:28Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 20:14:28.647973  496105 out.go:285] * 
	W1217 20:14:28.653610  496105 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 20:14:28.657120  496105 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-arm64 -p addons-052340 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (9.54s)
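The testdata manifests this test applies are not reproduced in the report. A hypothetical stand-in that exercises the same flow is sketched below; the manifest body and the "local-path" storageClassName (the class the rancher local-path-provisioner registers by default) are assumptions, not the actual testdata files. Note that the local-path class normally uses WaitForFirstConsumer binding, so the PVC stays Pending until a consuming pod is scheduled, which is why the test polls the phase while also creating a pod.

  # Hypothetical stand-in for testdata/storage-provisioner-rancher/pvc.yaml:
  cat <<'EOF' | kubectl --context addons-052340 apply -f -
  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: test-pvc
  spec:
    storageClassName: local-path
    accessModes: ["ReadWriteOnce"]
    resources:
      requests:
        storage: 64Mi
  EOF
  # The manual phase polling can be collapsed into a single wait
  # (kubectl 1.23+ supports --for=jsonpath):
  kubectl --context addons-052340 wait pvc/test-pvc \
    --for=jsonpath='{.status.phase}'=Bound --timeout=5m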

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.35s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-b7cpw" [f3e2f332-3129-49a8-8cb2-5dedaf7c64e2] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004366293s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-052340 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-052340 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (347.027327ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 20:14:18.849509  495643 out.go:360] Setting OutFile to fd 1 ...
	I1217 20:14:18.850175  495643 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:14:18.850192  495643 out.go:374] Setting ErrFile to fd 2...
	I1217 20:14:18.850197  495643 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:14:18.850453  495643 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-485134/.minikube/bin
	I1217 20:14:18.850805  495643 mustload.go:66] Loading cluster: addons-052340
	I1217 20:14:18.851207  495643 config.go:182] Loaded profile config "addons-052340": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:14:18.851227  495643 addons.go:622] checking whether the cluster is paused
	I1217 20:14:18.851335  495643 config.go:182] Loaded profile config "addons-052340": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:14:18.851351  495643 host.go:66] Checking if "addons-052340" exists ...
	I1217 20:14:18.851980  495643 cli_runner.go:164] Run: docker container inspect addons-052340 --format={{.State.Status}}
	I1217 20:14:18.871177  495643 ssh_runner.go:195] Run: systemctl --version
	I1217 20:14:18.871238  495643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-052340
	I1217 20:14:18.906458  495643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/addons-052340/id_rsa Username:docker}
	I1217 20:14:19.003564  495643 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 20:14:19.003742  495643 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 20:14:19.080426  495643 cri.go:89] found id: "a40f8c4c9d66754b6dd5a9559839fcf1fe8c00a06c71ce2f09926993d2c3d9eb"
	I1217 20:14:19.080453  495643 cri.go:89] found id: "5b40645a2f29661a7d2787522f23071b2c894de3a6b5b7638a2111a3f9485374"
	I1217 20:14:19.080459  495643 cri.go:89] found id: "4a7c07a0b97549aa9a9ce8e8e19127c9c8cf4e24abdccd8f61dcd057b97c9d88"
	I1217 20:14:19.080467  495643 cri.go:89] found id: "d3baa47458bc085e8d55815cf0497461a119e95081ce45cce1d577901a7dbf8d"
	I1217 20:14:19.080472  495643 cri.go:89] found id: "fcc9a7828f6b875df447059eabaf032b1c2d17e3c003bd1948f528d321bd3c85"
	I1217 20:14:19.080476  495643 cri.go:89] found id: "765f61bbb3ba86834abad3ede0a99451b487e6dc73b004c63a20a8f02e1d84a5"
	I1217 20:14:19.080479  495643 cri.go:89] found id: "d33875f7a8b74239db487ffa04bcf41c083f1ad65f29c43e2ae8b0dd783b521c"
	I1217 20:14:19.080482  495643 cri.go:89] found id: "79a9e5f9433489a03f4b2296098c887736f30cde921105d6ffa972cf7406abb4"
	I1217 20:14:19.080485  495643 cri.go:89] found id: "8d7f5c62629eb9e2812369690fb943f4e57ef24985d574b704ffc9b79a56d549"
	I1217 20:14:19.080491  495643 cri.go:89] found id: "af528db69b5d34f2ebcd038272fbfe9866e82f22a612276fb737683d83c256b9"
	I1217 20:14:19.080495  495643 cri.go:89] found id: "028c23d163f91b2b1e5d8071a70c7fb133ac203b414302242719cd40dba8d733"
	I1217 20:14:19.080498  495643 cri.go:89] found id: "85713e1610062bb4691b694312bf4faa71e96232dd94bc9fc564053646cde3a0"
	I1217 20:14:19.080502  495643 cri.go:89] found id: "cb82734cbf7f969e92c5a0b06400eb1d90d484748c850c4888051348e720776e"
	I1217 20:14:19.080505  495643 cri.go:89] found id: "f60e3e143ad43d5768067a6b27a4ece464edd1136c6931405c68bea040dd1097"
	I1217 20:14:19.080508  495643 cri.go:89] found id: "18d9f4acabfa7df7f42816d2545211aeda7f7362f2af1f80e188ea780addc6f8"
	I1217 20:14:19.080513  495643 cri.go:89] found id: "fead2bfafa7367c08236e1dd9040e848a879a067c3b70a78578d71429854ebf6"
	I1217 20:14:19.080516  495643 cri.go:89] found id: "2b6e269a93d8bccdf1b28bd4369628c766dbf092b4be199c7da9d8d756358c68"
	I1217 20:14:19.080520  495643 cri.go:89] found id: "7fd15b6b594715c56d64ee54d7a6852f04c749cfdf1c08a4bcf60d7a2e38d3b6"
	I1217 20:14:19.080523  495643 cri.go:89] found id: "6311d7d7f6f04d57961651f5acfdebe6792efa3db55df12e18440d726be5abcf"
	I1217 20:14:19.080526  495643 cri.go:89] found id: "6481ede2a21b9816559407d8564d57427a11b523832053f995eb07f4d1ce83bc"
	I1217 20:14:19.080531  495643 cri.go:89] found id: "873499eab93d6e655f10d1bf2c19abe29efc74e958b7418723b5c7c02699e059"
	I1217 20:14:19.080540  495643 cri.go:89] found id: "45a4f23a594c0e4311b2eac64abad5f26d74229e5aa0a19e11b7b7c18f0e227d"
	I1217 20:14:19.080543  495643 cri.go:89] found id: "dd0351330b604610224568f93b842fec24e6e4e932b309a6741ffab6f0d17b9f"
	I1217 20:14:19.080546  495643 cri.go:89] found id: ""
	I1217 20:14:19.080609  495643 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 20:14:19.109646  495643 out.go:203] 
	W1217 20:14:19.112944  495643 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T20:14:19Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 20:14:19.112986  495643 out.go:285] * 
	W1217 20:14:19.118521  495643 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 20:14:19.121667  495643 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-arm64 -p addons-052340 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (6.35s)
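The label poll above (helpers_test.go watching "name=nvidia-device-plugin-ds") has a one-line kubectl equivalent; a sketch, with the label, namespace, and timeout taken from the test output:

  kubectl --context addons-052340 -n kube-system wait pod \
    -l name=nvidia-device-plugin-ds --for=condition=Ready --timeout=6m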

                                                
                                    
x
+
TestAddons/parallel/Yakd (6.28s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-6654c87f9b-pgp6s" [1842f34d-21bb-4f09-94d3-500d1e9d1805] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.00308087s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-052340 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-052340 addons disable yakd --alsologtostderr -v=1: exit status 11 (271.290684ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 20:14:12.555457  495565 out.go:360] Setting OutFile to fd 1 ...
	I1217 20:14:12.556247  495565 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:14:12.556270  495565 out.go:374] Setting ErrFile to fd 2...
	I1217 20:14:12.556277  495565 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:14:12.556734  495565 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-485134/.minikube/bin
	I1217 20:14:12.557128  495565 mustload.go:66] Loading cluster: addons-052340
	I1217 20:14:12.557780  495565 config.go:182] Loaded profile config "addons-052340": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:14:12.557800  495565 addons.go:622] checking whether the cluster is paused
	I1217 20:14:12.557933  495565 config.go:182] Loaded profile config "addons-052340": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:14:12.557951  495565 host.go:66] Checking if "addons-052340" exists ...
	I1217 20:14:12.558912  495565 cli_runner.go:164] Run: docker container inspect addons-052340 --format={{.State.Status}}
	I1217 20:14:12.576735  495565 ssh_runner.go:195] Run: systemctl --version
	I1217 20:14:12.576790  495565 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-052340
	I1217 20:14:12.596148  495565 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/addons-052340/id_rsa Username:docker}
	I1217 20:14:12.691974  495565 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 20:14:12.692060  495565 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 20:14:12.735941  495565 cri.go:89] found id: "a40f8c4c9d66754b6dd5a9559839fcf1fe8c00a06c71ce2f09926993d2c3d9eb"
	I1217 20:14:12.735967  495565 cri.go:89] found id: "5b40645a2f29661a7d2787522f23071b2c894de3a6b5b7638a2111a3f9485374"
	I1217 20:14:12.735972  495565 cri.go:89] found id: "4a7c07a0b97549aa9a9ce8e8e19127c9c8cf4e24abdccd8f61dcd057b97c9d88"
	I1217 20:14:12.735976  495565 cri.go:89] found id: "d3baa47458bc085e8d55815cf0497461a119e95081ce45cce1d577901a7dbf8d"
	I1217 20:14:12.735979  495565 cri.go:89] found id: "fcc9a7828f6b875df447059eabaf032b1c2d17e3c003bd1948f528d321bd3c85"
	I1217 20:14:12.735983  495565 cri.go:89] found id: "765f61bbb3ba86834abad3ede0a99451b487e6dc73b004c63a20a8f02e1d84a5"
	I1217 20:14:12.735986  495565 cri.go:89] found id: "d33875f7a8b74239db487ffa04bcf41c083f1ad65f29c43e2ae8b0dd783b521c"
	I1217 20:14:12.735989  495565 cri.go:89] found id: "79a9e5f9433489a03f4b2296098c887736f30cde921105d6ffa972cf7406abb4"
	I1217 20:14:12.735992  495565 cri.go:89] found id: "8d7f5c62629eb9e2812369690fb943f4e57ef24985d574b704ffc9b79a56d549"
	I1217 20:14:12.735999  495565 cri.go:89] found id: "af528db69b5d34f2ebcd038272fbfe9866e82f22a612276fb737683d83c256b9"
	I1217 20:14:12.736002  495565 cri.go:89] found id: "028c23d163f91b2b1e5d8071a70c7fb133ac203b414302242719cd40dba8d733"
	I1217 20:14:12.736006  495565 cri.go:89] found id: "85713e1610062bb4691b694312bf4faa71e96232dd94bc9fc564053646cde3a0"
	I1217 20:14:12.736013  495565 cri.go:89] found id: "cb82734cbf7f969e92c5a0b06400eb1d90d484748c850c4888051348e720776e"
	I1217 20:14:12.736020  495565 cri.go:89] found id: "f60e3e143ad43d5768067a6b27a4ece464edd1136c6931405c68bea040dd1097"
	I1217 20:14:12.736023  495565 cri.go:89] found id: "18d9f4acabfa7df7f42816d2545211aeda7f7362f2af1f80e188ea780addc6f8"
	I1217 20:14:12.736029  495565 cri.go:89] found id: "fead2bfafa7367c08236e1dd9040e848a879a067c3b70a78578d71429854ebf6"
	I1217 20:14:12.736037  495565 cri.go:89] found id: "2b6e269a93d8bccdf1b28bd4369628c766dbf092b4be199c7da9d8d756358c68"
	I1217 20:14:12.736042  495565 cri.go:89] found id: "7fd15b6b594715c56d64ee54d7a6852f04c749cfdf1c08a4bcf60d7a2e38d3b6"
	I1217 20:14:12.736046  495565 cri.go:89] found id: "6311d7d7f6f04d57961651f5acfdebe6792efa3db55df12e18440d726be5abcf"
	I1217 20:14:12.736049  495565 cri.go:89] found id: "6481ede2a21b9816559407d8564d57427a11b523832053f995eb07f4d1ce83bc"
	I1217 20:14:12.736054  495565 cri.go:89] found id: "873499eab93d6e655f10d1bf2c19abe29efc74e958b7418723b5c7c02699e059"
	I1217 20:14:12.736057  495565 cri.go:89] found id: "45a4f23a594c0e4311b2eac64abad5f26d74229e5aa0a19e11b7b7c18f0e227d"
	I1217 20:14:12.736060  495565 cri.go:89] found id: "dd0351330b604610224568f93b842fec24e6e4e932b309a6741ffab6f0d17b9f"
	I1217 20:14:12.736063  495565 cri.go:89] found id: ""
	I1217 20:14:12.736113  495565 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 20:14:12.756631  495565 out.go:203] 
	W1217 20:14:12.759616  495565 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T20:14:12Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 20:14:12.759646  495565 out.go:285] * 
	W1217 20:14:12.765188  495565 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 20:14:12.769107  495565 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable yakd addon: args "out/minikube-linux-arm64 -p addons-052340 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (6.28s)
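As in the preceding addon tests, the disable step dies in the paused-state check rather than in yakd itself. Both halves of that check can be replayed by hand; the commands are taken verbatim from the stderr above (the crictl listing succeeds, the runc fallback does not):

  minikube -p addons-052340 ssh -- sudo -s eval \
    "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
  minikube -p addons-052340 ssh -- sudo runc list -f json   # fails: /run/runc missing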

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy (499.19s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-655452 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
E1217 20:21:40.525730  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:23:56.663847  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:24:24.367943  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:25:30.854018  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-643319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:25:30.860488  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-643319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:25:30.872018  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-643319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:25:30.893481  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-643319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:25:30.934948  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-643319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:25:31.016456  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-643319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:25:31.178058  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-643319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:25:31.499807  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-643319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:25:32.141872  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-643319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:25:33.424252  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-643319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:25:35.987132  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-643319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:25:41.109238  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-643319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:25:51.351415  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-643319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:26:11.832879  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-643319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:26:52.794299  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-643319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:28:14.715785  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-643319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:28:56.663806  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-655452 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: exit status 109 (8m17.78363624s)

                                                
                                                
-- stdout --
	* [functional-655452] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21808
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21808-485134/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-485134/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "functional-655452" primary control-plane node in "functional-655452" cluster
	* Pulling base image v0.0.48-1765661130-22141 ...
	* Found network options:
	  - HTTP_PROXY=localhost:34705
	* Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	* Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:34705 to docker env.
	! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-655452 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-655452 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001110167s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001034888s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

                                                
                                                
** /stderr **
functional_test.go:2241: failed minikube start. args "out/minikube-linux-arm64 start -p functional-655452 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1": exit status 109
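The exit message itself suggests retrying with an explicit kubelet cgroup driver. A retry sketch, reusing the exact flags of the failed invocation plus the suggested --extra-config; whether that resolves the cgroup-v1 kubelet failure on this host is not verified here:

  out/minikube-linux-arm64 start -p functional-655452 --memory=4096 \
    --apiserver-port=8441 --wait=all --driver=docker --container-runtime=crio \
    --kubernetes-version=v1.35.0-rc.1 \
    --extra-config=kubelet.cgroup-driver=systemd
  # If the kubelet still never becomes healthy, its journal (named in the
  # kubeadm output above) is the next place to look:
  minikube -p functional-655452 ssh -- sudo journalctl -xeu kubelet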
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-655452
helpers_test.go:244: (dbg) docker inspect functional-655452:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ce7a6a54d14f425b4187bd0ca1b803527a4413b0e2019a0bca6ff0c4cc720b0f",
	        "Created": "2025-12-17T20:21:23.127111799Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 517339,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T20:21:23.205943598Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/ce7a6a54d14f425b4187bd0ca1b803527a4413b0e2019a0bca6ff0c4cc720b0f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ce7a6a54d14f425b4187bd0ca1b803527a4413b0e2019a0bca6ff0c4cc720b0f/hostname",
	        "HostsPath": "/var/lib/docker/containers/ce7a6a54d14f425b4187bd0ca1b803527a4413b0e2019a0bca6ff0c4cc720b0f/hosts",
	        "LogPath": "/var/lib/docker/containers/ce7a6a54d14f425b4187bd0ca1b803527a4413b0e2019a0bca6ff0c4cc720b0f/ce7a6a54d14f425b4187bd0ca1b803527a4413b0e2019a0bca6ff0c4cc720b0f-json.log",
	        "Name": "/functional-655452",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-655452:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-655452",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ce7a6a54d14f425b4187bd0ca1b803527a4413b0e2019a0bca6ff0c4cc720b0f",
	                "LowerDir": "/var/lib/docker/overlay2/b078ea641dafe3bb75ca3d83c43fd851f0f209e5f3a5a8b9ceab888e394031a8-init/diff:/var/lib/docker/overlay2/c4759b344ecb109c83d66e4ef56c76903f1d1e597efb86ab2d2c7911b1130a8f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b078ea641dafe3bb75ca3d83c43fd851f0f209e5f3a5a8b9ceab888e394031a8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b078ea641dafe3bb75ca3d83c43fd851f0f209e5f3a5a8b9ceab888e394031a8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b078ea641dafe3bb75ca3d83c43fd851f0f209e5f3a5a8b9ceab888e394031a8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-655452",
	                "Source": "/var/lib/docker/volumes/functional-655452/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-655452",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-655452",
	                "name.minikube.sigs.k8s.io": "functional-655452",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "77ae7dbcf69b3201be4ce65a88d235dbad078142318e01a5939425f9766fb924",
	            "SandboxKey": "/var/run/docker/netns/77ae7dbcf69b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33178"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33179"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33182"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33180"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33181"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-655452": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "36:c6:98:b5:58:5a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b68cd6e69ccdb81b0870dd976e2d5401ca696480842d2c469293121d59435cfb",
	                    "EndpointID": "82926bb4b08b5f66131c1026a8bc72a582e9cb8c6d97a278a494965923f47cb0",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-655452",
	                        "ce7a6a54d14f"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
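The inspect output above shows the apiserver port (8441/tcp) published on 127.0.0.1:33181. A sketch for probing it directly, reusing the same Go-template form minikube itself runs later in this log (assumes the container is still up; -k skips verification because the cluster serves its self-signed minikubeCA certificate):

    # Extract the host port bound to the apiserver port:
    docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-655452

    # Probe the apiserver health endpoint through that published port:
    curl -sk https://127.0.0.1:33181/healthz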
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-655452 -n functional-655452
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-655452 -n functional-655452: exit status 6 (306.25139ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1217 20:29:36.091655  522542 status.go:458] kubeconfig endpoint: get endpoint: "functional-655452" does not appear in /home/jenkins/minikube-integration/21808-485134/kubeconfig

** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
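The status output warns that kubectl points at a stale context, and the stderr confirms "functional-655452" is absent from the kubeconfig. A sketch of the repair the warning itself names (the same update-context subcommand appears in the Audit log below):

    # Rewrite the kubeconfig entry for this profile:
    out/minikube-linux-arm64 -p functional-655452 update-context

    # Verify kubectl now targets the profile:
    kubectl config current-context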
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                      ARGS                                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ mount          │ -p functional-643319 /tmp/TestFunctionalparallelMountCmdspecific-port2571748949/001:/mount-9p --alsologtostderr -v=1 --port 46464               │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │                     │
	│ ssh            │ functional-643319 ssh findmnt -T /mount-9p | grep 9p                                                                                            │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │                     │
	│ ssh            │ functional-643319 ssh findmnt -T /mount-9p | grep 9p                                                                                            │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │ 17 Dec 25 20:21 UTC │
	│ ssh            │ functional-643319 ssh -- ls -la /mount-9p                                                                                                       │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │ 17 Dec 25 20:21 UTC │
	│ ssh            │ functional-643319 ssh sudo umount -f /mount-9p                                                                                                  │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │                     │
	│ mount          │ -p functional-643319 /tmp/TestFunctionalparallelMountCmdVerifyCleanup781964025/001:/mount2 --alsologtostderr -v=1                               │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │                     │
	│ mount          │ -p functional-643319 /tmp/TestFunctionalparallelMountCmdVerifyCleanup781964025/001:/mount1 --alsologtostderr -v=1                               │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │                     │
	│ mount          │ -p functional-643319 /tmp/TestFunctionalparallelMountCmdVerifyCleanup781964025/001:/mount3 --alsologtostderr -v=1                               │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │                     │
	│ ssh            │ functional-643319 ssh findmnt -T /mount1                                                                                                        │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │                     │
	│ ssh            │ functional-643319 ssh findmnt -T /mount1                                                                                                        │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │ 17 Dec 25 20:21 UTC │
	│ ssh            │ functional-643319 ssh findmnt -T /mount2                                                                                                        │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │ 17 Dec 25 20:21 UTC │
	│ ssh            │ functional-643319 ssh findmnt -T /mount3                                                                                                        │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │ 17 Dec 25 20:21 UTC │
	│ mount          │ -p functional-643319 --kill=true                                                                                                                │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │                     │
	│ update-context │ functional-643319 update-context --alsologtostderr -v=2                                                                                         │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │ 17 Dec 25 20:21 UTC │
	│ update-context │ functional-643319 update-context --alsologtostderr -v=2                                                                                         │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │ 17 Dec 25 20:21 UTC │
	│ update-context │ functional-643319 update-context --alsologtostderr -v=2                                                                                         │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │ 17 Dec 25 20:21 UTC │
	│ image          │ functional-643319 image ls --format short --alsologtostderr                                                                                     │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │ 17 Dec 25 20:21 UTC │
	│ image          │ functional-643319 image ls --format yaml --alsologtostderr                                                                                      │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │ 17 Dec 25 20:21 UTC │
	│ ssh            │ functional-643319 ssh pgrep buildkitd                                                                                                           │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │                     │
	│ image          │ functional-643319 image build -t localhost/my-image:functional-643319 testdata/build --alsologtostderr                                          │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │ 17 Dec 25 20:21 UTC │
	│ image          │ functional-643319 image ls --format json --alsologtostderr                                                                                      │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │ 17 Dec 25 20:21 UTC │
	│ image          │ functional-643319 image ls --format table --alsologtostderr                                                                                     │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │ 17 Dec 25 20:21 UTC │
	│ image          │ functional-643319 image ls                                                                                                                      │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │ 17 Dec 25 20:21 UTC │
	│ delete         │ -p functional-643319                                                                                                                            │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │ 17 Dec 25 20:21 UTC │
	│ start          │ -p functional-655452 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 20:21:18
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 20:21:18.049096  516955 out.go:360] Setting OutFile to fd 1 ...
	I1217 20:21:18.049239  516955 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:21:18.049244  516955 out.go:374] Setting ErrFile to fd 2...
	I1217 20:21:18.049248  516955 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:21:18.049510  516955 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-485134/.minikube/bin
	I1217 20:21:18.049945  516955 out.go:368] Setting JSON to false
	I1217 20:21:18.050784  516955 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":11027,"bootTime":1765991851,"procs":154,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1217 20:21:18.050850  516955 start.go:143] virtualization:  
	I1217 20:21:18.052664  516955 out.go:179] * [functional-655452] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 20:21:18.054360  516955 out.go:179]   - MINIKUBE_LOCATION=21808
	I1217 20:21:18.054457  516955 notify.go:221] Checking for updates...
	I1217 20:21:18.056871  516955 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 20:21:18.058416  516955 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-485134/kubeconfig
	I1217 20:21:18.059670  516955 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-485134/.minikube
	I1217 20:21:18.060813  516955 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 20:21:18.061965  516955 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 20:21:18.063426  516955 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 20:21:18.086905  516955 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 20:21:18.087042  516955 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 20:21:18.147990  516955 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-17 20:21:18.137858406 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 20:21:18.148087  516955 docker.go:319] overlay module found
	I1217 20:21:18.149817  516955 out.go:179] * Using the docker driver based on user configuration
	I1217 20:21:18.151034  516955 start.go:309] selected driver: docker
	I1217 20:21:18.151043  516955 start.go:927] validating driver "docker" against <nil>
	I1217 20:21:18.151055  516955 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 20:21:18.151875  516955 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 20:21:18.213956  516955 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-17 20:21:18.204676941 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 20:21:18.214097  516955 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 20:21:18.214324  516955 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 20:21:18.215743  516955 out.go:179] * Using Docker driver with root privileges
	I1217 20:21:18.216875  516955 cni.go:84] Creating CNI manager for ""
	I1217 20:21:18.216927  516955 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 20:21:18.216935  516955 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1217 20:21:18.217014  516955 start.go:353] cluster config:
	{Name:functional-655452 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-655452 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:21:18.218422  516955 out.go:179] * Starting "functional-655452" primary control-plane node in "functional-655452" cluster
	I1217 20:21:18.219482  516955 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 20:21:18.220774  516955 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 20:21:18.221903  516955 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1217 20:21:18.221940  516955 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21808-485134/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4
	I1217 20:21:18.221948  516955 cache.go:65] Caching tarball of preloaded images
	I1217 20:21:18.222038  516955 preload.go:238] Found /home/jenkins/minikube-integration/21808-485134/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1217 20:21:18.222048  516955 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on crio
	I1217 20:21:18.222399  516955 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/config.json ...
	I1217 20:21:18.222422  516955 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/config.json: {Name:mk573c5766d8e8b13d02ffb912d268a01e302ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:21:18.222580  516955 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 20:21:18.243176  516955 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 20:21:18.243188  516955 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 20:21:18.243210  516955 cache.go:243] Successfully downloaded all kic artifacts
	I1217 20:21:18.243241  516955 start.go:360] acquireMachinesLock for functional-655452: {Name:mk8b480106227149e1b79da3c4f580b9dc5e0f96 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 20:21:18.243354  516955 start.go:364] duration metric: took 97.133µs to acquireMachinesLock for "functional-655452"
	I1217 20:21:18.243386  516955 start.go:93] Provisioning new machine with config: &{Name:functional-655452 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-655452 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 20:21:18.243450  516955 start.go:125] createHost starting for "" (driver="docker")
	I1217 20:21:18.245128  516955 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	W1217 20:21:18.245416  516955 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:34705 to docker env.
	I1217 20:21:18.245441  516955 start.go:159] libmachine.API.Create for "functional-655452" (driver="docker")
	I1217 20:21:18.245471  516955 client.go:173] LocalClient.Create starting
	I1217 20:21:18.245526  516955 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem
	I1217 20:21:18.245562  516955 main.go:143] libmachine: Decoding PEM data...
	I1217 20:21:18.245575  516955 main.go:143] libmachine: Parsing certificate...
	I1217 20:21:18.245636  516955 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem
	I1217 20:21:18.245658  516955 main.go:143] libmachine: Decoding PEM data...
	I1217 20:21:18.245668  516955 main.go:143] libmachine: Parsing certificate...
	I1217 20:21:18.246029  516955 cli_runner.go:164] Run: docker network inspect functional-655452 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1217 20:21:18.262682  516955 cli_runner.go:211] docker network inspect functional-655452 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1217 20:21:18.262755  516955 network_create.go:284] running [docker network inspect functional-655452] to gather additional debugging logs...
	I1217 20:21:18.262770  516955 cli_runner.go:164] Run: docker network inspect functional-655452
	W1217 20:21:18.281921  516955 cli_runner.go:211] docker network inspect functional-655452 returned with exit code 1
	I1217 20:21:18.281940  516955 network_create.go:287] error running [docker network inspect functional-655452]: docker network inspect functional-655452: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network functional-655452 not found
	I1217 20:21:18.281965  516955 network_create.go:289] output of [docker network inspect functional-655452]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network functional-655452 not found
	
	** /stderr **
	I1217 20:21:18.282068  516955 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 20:21:18.301491  516955 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019118d0}
	I1217 20:21:18.301527  516955 network_create.go:124] attempt to create docker network functional-655452 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1217 20:21:18.301580  516955 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-655452 functional-655452
	I1217 20:21:18.356547  516955 network_create.go:108] docker network functional-655452 192.168.49.0/24 created
	I1217 20:21:18.356570  516955 kic.go:121] calculated static IP "192.168.49.2" for the "functional-655452" container
	I1217 20:21:18.356643  516955 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1217 20:21:18.371932  516955 cli_runner.go:164] Run: docker volume create functional-655452 --label name.minikube.sigs.k8s.io=functional-655452 --label created_by.minikube.sigs.k8s.io=true
	I1217 20:21:18.389236  516955 oci.go:103] Successfully created a docker volume functional-655452
	I1217 20:21:18.389311  516955 cli_runner.go:164] Run: docker run --rm --name functional-655452-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-655452 --entrypoint /usr/bin/test -v functional-655452:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1217 20:21:18.856426  516955 oci.go:107] Successfully prepared a docker volume functional-655452
	I1217 20:21:18.856477  516955 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1217 20:21:18.856485  516955 kic.go:194] Starting extracting preloaded images to volume ...
	I1217 20:21:18.856564  516955 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21808-485134/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v functional-655452:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
	I1217 20:21:23.059802  516955 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21808-485134/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v functional-655452:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (4.203202412s)
	I1217 20:21:23.059824  516955 kic.go:203] duration metric: took 4.203334926s to extract preloaded images to volume ...
	W1217 20:21:23.059978  516955 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1217 20:21:23.060084  516955 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1217 20:21:23.112241  516955 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname functional-655452 --name functional-655452 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-655452 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=functional-655452 --network functional-655452 --ip 192.168.49.2 --volume functional-655452:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8441 --publish=127.0.0.1::8441 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
	I1217 20:21:23.447972  516955 cli_runner.go:164] Run: docker container inspect functional-655452 --format={{.State.Running}}
	I1217 20:21:23.480138  516955 cli_runner.go:164] Run: docker container inspect functional-655452 --format={{.State.Status}}
	I1217 20:21:23.505383  516955 cli_runner.go:164] Run: docker exec functional-655452 stat /var/lib/dpkg/alternatives/iptables
	I1217 20:21:23.555694  516955 oci.go:144] the created container "functional-655452" has a running status.
	I1217 20:21:23.555716  516955 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21808-485134/.minikube/machines/functional-655452/id_rsa...
	I1217 20:21:24.048706  516955 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21808-485134/.minikube/machines/functional-655452/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1217 20:21:24.085580  516955 cli_runner.go:164] Run: docker container inspect functional-655452 --format={{.State.Status}}
	I1217 20:21:24.115279  516955 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1217 20:21:24.115290  516955 kic_runner.go:114] Args: [docker exec --privileged functional-655452 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1217 20:21:24.167238  516955 cli_runner.go:164] Run: docker container inspect functional-655452 --format={{.State.Status}}
	I1217 20:21:24.188146  516955 machine.go:94] provisionDockerMachine start ...
	I1217 20:21:24.188236  516955 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
	I1217 20:21:24.215838  516955 main.go:143] libmachine: Using SSH client type: native
	I1217 20:21:24.216168  516955 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1217 20:21:24.216182  516955 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 20:21:24.383690  516955 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-655452
	
	I1217 20:21:24.383761  516955 ubuntu.go:182] provisioning hostname "functional-655452"
	I1217 20:21:24.383835  516955 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
	I1217 20:21:24.406204  516955 main.go:143] libmachine: Using SSH client type: native
	I1217 20:21:24.406529  516955 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1217 20:21:24.406539  516955 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-655452 && echo "functional-655452" | sudo tee /etc/hostname
	I1217 20:21:24.571358  516955 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-655452
	
	I1217 20:21:24.571437  516955 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
	I1217 20:21:24.595273  516955 main.go:143] libmachine: Using SSH client type: native
	I1217 20:21:24.595789  516955 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1217 20:21:24.595827  516955 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-655452' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-655452/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-655452' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 20:21:24.744012  516955 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 20:21:24.744028  516955 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21808-485134/.minikube CaCertPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21808-485134/.minikube}
	I1217 20:21:24.744057  516955 ubuntu.go:190] setting up certificates
	I1217 20:21:24.744065  516955 provision.go:84] configureAuth start
	I1217 20:21:24.744124  516955 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-655452
	I1217 20:21:24.761219  516955 provision.go:143] copyHostCerts
	I1217 20:21:24.761281  516955 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem, removing ...
	I1217 20:21:24.761289  516955 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem
	I1217 20:21:24.761368  516955 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem (1082 bytes)
	I1217 20:21:24.761458  516955 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem, removing ...
	I1217 20:21:24.761462  516955 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem
	I1217 20:21:24.761486  516955 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem (1123 bytes)
	I1217 20:21:24.761532  516955 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem, removing ...
	I1217 20:21:24.761536  516955 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem
	I1217 20:21:24.761557  516955 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem (1675 bytes)
	I1217 20:21:24.761598  516955 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem org=jenkins.functional-655452 san=[127.0.0.1 192.168.49.2 functional-655452 localhost minikube]
	I1217 20:21:25.152346  516955 provision.go:177] copyRemoteCerts
	I1217 20:21:25.152399  516955 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 20:21:25.152439  516955 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
	I1217 20:21:25.173566  516955 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/functional-655452/id_rsa Username:docker}
	I1217 20:21:25.268146  516955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 20:21:25.286879  516955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 20:21:25.305445  516955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 20:21:25.323647  516955 provision.go:87] duration metric: took 579.561017ms to configureAuth
	I1217 20:21:25.323664  516955 ubuntu.go:206] setting minikube options for container-runtime
	I1217 20:21:25.323862  516955 config.go:182] Loaded profile config "functional-655452": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 20:21:25.323979  516955 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
	I1217 20:21:25.343874  516955 main.go:143] libmachine: Using SSH client type: native
	I1217 20:21:25.344228  516955 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1217 20:21:25.344242  516955 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 20:21:25.635051  516955 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 20:21:25.635067  516955 machine.go:97] duration metric: took 1.446909626s to provisionDockerMachine
	I1217 20:21:25.635077  516955 client.go:176] duration metric: took 7.38960241s to LocalClient.Create
	I1217 20:21:25.635095  516955 start.go:167] duration metric: took 7.389653347s to libmachine.API.Create "functional-655452"
	I1217 20:21:25.635101  516955 start.go:293] postStartSetup for "functional-655452" (driver="docker")
	I1217 20:21:25.635111  516955 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 20:21:25.635178  516955 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 20:21:25.635218  516955 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
	I1217 20:21:25.658850  516955 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/functional-655452/id_rsa Username:docker}
	I1217 20:21:25.755956  516955 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 20:21:25.759538  516955 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 20:21:25.759556  516955 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 20:21:25.759568  516955 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-485134/.minikube/addons for local assets ...
	I1217 20:21:25.759661  516955 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-485134/.minikube/files for local assets ...
	I1217 20:21:25.759755  516955 filesync.go:149] local asset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem -> 4884122.pem in /etc/ssl/certs
	I1217 20:21:25.759833  516955 filesync.go:149] local asset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/test/nested/copy/488412/hosts -> hosts in /etc/test/nested/copy/488412
	I1217 20:21:25.759877  516955 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/488412
	I1217 20:21:25.768004  516955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem --> /etc/ssl/certs/4884122.pem (1708 bytes)
	I1217 20:21:25.786218  516955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/test/nested/copy/488412/hosts --> /etc/test/nested/copy/488412/hosts (40 bytes)
	I1217 20:21:25.805058  516955 start.go:296] duration metric: took 169.943565ms for postStartSetup
	I1217 20:21:25.805421  516955 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-655452
	I1217 20:21:25.822492  516955 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/config.json ...
	I1217 20:21:25.822767  516955 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 20:21:25.822810  516955 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
	I1217 20:21:25.840478  516955 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/functional-655452/id_rsa Username:docker}
	I1217 20:21:25.932930  516955 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 20:21:25.937872  516955 start.go:128] duration metric: took 7.694408896s to createHost
	I1217 20:21:25.937887  516955 start.go:83] releasing machines lock for "functional-655452", held for 7.694526321s
	I1217 20:21:25.937973  516955 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-655452
	I1217 20:21:25.959319  516955 out.go:179] * Found network options:
	I1217 20:21:25.962328  516955 out.go:179]   - HTTP_PROXY=localhost:34705
	W1217 20:21:25.965289  516955 out.go:285] ! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	I1217 20:21:25.968165  516955 out.go:179] * Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	I1217 20:21:25.971164  516955 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem (1338 bytes)
	W1217 20:21:25.971219  516955 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412_empty.pem, impossibly tiny 0 bytes
	I1217 20:21:25.971227  516955 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 20:21:25.971257  516955 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem (1082 bytes)
	I1217 20:21:25.971284  516955 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem (1123 bytes)
	I1217 20:21:25.971308  516955 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem (1675 bytes)
	I1217 20:21:25.971359  516955 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem (1708 bytes)
	I1217 20:21:25.971425  516955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem --> /usr/share/ca-certificates/4884122.pem (1708 bytes)
	I1217 20:21:25.971482  516955 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
	I1217 20:21:25.988991  516955 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/functional-655452/id_rsa Username:docker}
	I1217 20:21:26.101732  516955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 20:21:26.119595  516955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem --> /usr/share/ca-certificates/488412.pem (1338 bytes)
	I1217 20:21:26.137829  516955 ssh_runner.go:195] Run: openssl version
	I1217 20:21:26.144221  516955 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4884122.pem
	I1217 20:21:26.151531  516955 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4884122.pem /etc/ssl/certs/4884122.pem
	I1217 20:21:26.159498  516955 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4884122.pem
	I1217 20:21:26.163517  516955 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 20:21 /usr/share/ca-certificates/4884122.pem
	I1217 20:21:26.163575  516955 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4884122.pem
	I1217 20:21:26.204317  516955 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 20:21:26.211473  516955 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4884122.pem /etc/ssl/certs/3ec20f2e.0
	I1217 20:21:26.218521  516955 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:21:26.226165  516955 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 20:21:26.233245  516955 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:21:26.237187  516955 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:21:26.237246  516955 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:21:26.278139  516955 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 20:21:26.285378  516955 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1217 20:21:26.292492  516955 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/488412.pem
	I1217 20:21:26.299693  516955 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/488412.pem /etc/ssl/certs/488412.pem
	I1217 20:21:26.307303  516955 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/488412.pem
	I1217 20:21:26.310966  516955 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 20:21 /usr/share/ca-certificates/488412.pem
	I1217 20:21:26.311024  516955 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/488412.pem
	I1217 20:21:26.352817  516955 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 20:21:26.360050  516955 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/488412.pem /etc/ssl/certs/51391683.0
	I1217 20:21:26.367477  516955 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-certificates >/dev/null 2>&1 && sudo update-ca-certificates || true"
	I1217 20:21:26.370830  516955 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-trust >/dev/null 2>&1 && sudo update-ca-trust extract || true"
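
Note: the openssl x509 -hash calls above compute the subject-name hash that OpenSSL uses to look certificates up in /etc/ssl/certs, and the <hash>.0 symlinks seen here (3ec20f2e.0, b5213941.0, 51391683.0) are named after those hashes. A minimal sketch of the same convention, using one of this run's paths:

    # Derive the OpenSSL subject hash, then create the lookup symlink
    # that points at the cert already linked into /etc/ssl/certs.
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
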
	I1217 20:21:26.374255  516955 ssh_runner.go:195] Run: cat /version.json
	I1217 20:21:26.374318  516955 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 20:21:26.466254  516955 ssh_runner.go:195] Run: systemctl --version
	I1217 20:21:26.472669  516955 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 20:21:26.509495  516955 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 20:21:26.513677  516955 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 20:21:26.513747  516955 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 20:21:26.541897  516955 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1217 20:21:26.541911  516955 start.go:496] detecting cgroup driver to use...
	I1217 20:21:26.541951  516955 detect.go:187] detected "cgroupfs" cgroup driver on host os
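
Note: detect.go reports the host's cgroup driver here; "cgroupfs" corresponds to a cgroup v1 host. A quick, hedged way to see what the host exposes (not the exact detection logic minikube uses):

    # "cgroup2fs" means a unified cgroup v2 hierarchy; "tmpfs" is the
    # classic v1 layout that this run detected as cgroupfs.
    stat -fc %T /sys/fs/cgroup
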
	I1217 20:21:26.542005  516955 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 20:21:26.558713  516955 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 20:21:26.571505  516955 docker.go:218] disabling cri-docker service (if available) ...
	I1217 20:21:26.571677  516955 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 20:21:26.589576  516955 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 20:21:26.608063  516955 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 20:21:26.726879  516955 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 20:21:26.862772  516955 docker.go:234] disabling docker service ...
	I1217 20:21:26.862834  516955 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 20:21:26.883923  516955 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 20:21:26.897426  516955 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 20:21:27.018380  516955 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 20:21:27.141320  516955 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 20:21:27.154210  516955 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 20:21:27.168231  516955 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 20:21:27.168301  516955 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:21:27.177877  516955 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1217 20:21:27.177938  516955 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:21:27.187485  516955 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:21:27.197099  516955 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:21:27.205734  516955 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 20:21:27.213873  516955 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:21:27.222982  516955 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:21:27.237019  516955 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:21:27.246323  516955 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 20:21:27.254120  516955 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 20:21:27.261452  516955 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:21:27.376043  516955 ssh_runner.go:195] Run: sudo systemctl restart crio
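
Note: the sed edits above rewrite the CRI-O drop-in in place before this restart, and crictl was pointed at the CRI-O socket via /etc/crictl.yaml a few lines earlier. A hedged way to confirm both took effect after the restart:

    # The drop-in should now pin the pause image, the cgroupfs manager,
    # the conmon cgroup, and the unprivileged-port sysctl.
    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    cat /etc/crictl.yaml   # runtime-endpoint: unix:///var/run/crio/crio.sock
    sudo crictl info >/dev/null && echo "CRI endpoint reachable"
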
	I1217 20:21:27.550796  516955 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 20:21:27.550856  516955 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 20:21:27.554705  516955 start.go:564] Will wait 60s for crictl version
	I1217 20:21:27.554781  516955 ssh_runner.go:195] Run: which crictl
	I1217 20:21:27.558329  516955 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 20:21:27.581856  516955 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 20:21:27.581933  516955 ssh_runner.go:195] Run: crio --version
	I1217 20:21:27.611352  516955 ssh_runner.go:195] Run: crio --version
	I1217 20:21:27.647222  516955 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	I1217 20:21:27.650064  516955 cli_runner.go:164] Run: docker network inspect functional-655452 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 20:21:27.666429  516955 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1217 20:21:27.670519  516955 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
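
Note: the one-liner above is minikube's idempotent hosts-file update: filter out any stale host.minikube.internal entry, append the current mapping, and copy the temp file back over /etc/hosts. The same pattern, generalized (host and IP are this run's values):

    # $'\t' keeps the literal tab that separates IP and name in /etc/hosts.
    { grep -v $'\thost.minikube.internal$' /etc/hosts; \
      printf '192.168.49.2\thost.minikube.internal\n'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts
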
	I1217 20:21:27.680220  516955 kubeadm.go:884] updating cluster {Name:functional-655452 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-655452 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 20:21:27.680330  516955 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1217 20:21:27.680382  516955 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 20:21:27.715427  516955 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 20:21:27.715437  516955 crio.go:433] Images already preloaded, skipping extraction
	I1217 20:21:27.715494  516955 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 20:21:27.742061  516955 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 20:21:27.742073  516955 cache_images.go:86] Images are preloaded, skipping loading
	I1217 20:21:27.742079  516955 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 crio true true} ...
	I1217 20:21:27.742167  516955 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-655452 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-655452 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
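
Note on the rendered unit above: in a systemd drop-in, the bare "ExecStart=" line clears the ExecStart inherited from the base kubelet.service before the next line redefines it; without that reset, systemd would reject a second ExecStart for a non-oneshot service. To inspect the merged result on the node:

    systemctl cat kubelet           # base unit plus the 10-kubeadm.conf drop-in
    systemd-delta --type=extended   # optional: list all drop-in overrides
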
	I1217 20:21:27.742242  516955 ssh_runner.go:195] Run: crio config
	I1217 20:21:27.806352  516955 cni.go:84] Creating CNI manager for ""
	I1217 20:21:27.806362  516955 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 20:21:27.806371  516955 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 20:21:27.806392  516955 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-655452 NodeName:functional-655452 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 20:21:27.806508  516955 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-655452"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
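
Note: before a rendered config like the one above is handed to init, recent kubeadm releases can sanity-check it. A hedged example using this run's paths; minikube does not invoke this here, so treat it as an optional manual step:

    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm config validate \
        --config /var/tmp/minikube/kubeadm.yaml
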
	
	I1217 20:21:27.806579  516955 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1217 20:21:27.814343  516955 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 20:21:27.814406  516955 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 20:21:27.822364  516955 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1217 20:21:27.835299  516955 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1217 20:21:27.848611  516955 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
	I1217 20:21:27.862227  516955 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1217 20:21:27.866389  516955 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 20:21:27.876774  516955 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:21:28.005248  516955 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 20:21:28.023255  516955 certs.go:69] Setting up /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452 for IP: 192.168.49.2
	I1217 20:21:28.023267  516955 certs.go:195] generating shared ca certs ...
	I1217 20:21:28.023282  516955 certs.go:227] acquiring lock for ca certs: {Name:mk2b1bd9fa0b029b02dd38a7b1b08717d4426eca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:21:28.023430  516955 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.key
	I1217 20:21:28.023473  516955 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.key
	I1217 20:21:28.023479  516955 certs.go:257] generating profile certs ...
	I1217 20:21:28.023538  516955 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/client.key
	I1217 20:21:28.023549  516955 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/client.crt with IP's: []
	I1217 20:21:28.172183  516955 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/client.crt ...
	I1217 20:21:28.172201  516955 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/client.crt: {Name:mkb9cde66dde04928a2401d7b0fcba7a5af9f73e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:21:28.172409  516955 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/client.key ...
	I1217 20:21:28.172416  516955 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/client.key: {Name:mke415c576b0966a2d59dd70ec0e22be4cb9d5ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:21:28.172507  516955 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/apiserver.key.aa95dda5
	I1217 20:21:28.172522  516955 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/apiserver.crt.aa95dda5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1217 20:21:28.451888  516955 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/apiserver.crt.aa95dda5 ...
	I1217 20:21:28.451904  516955 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/apiserver.crt.aa95dda5: {Name:mk0c1ba95c0657b45964de2e28386c6dd0b1c4dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:21:28.452092  516955 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/apiserver.key.aa95dda5 ...
	I1217 20:21:28.452101  516955 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/apiserver.key.aa95dda5: {Name:mkb2f5f2e7f8206844bc0991c458ba8b4f8a40cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:21:28.452190  516955 certs.go:382] copying /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/apiserver.crt.aa95dda5 -> /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/apiserver.crt
	I1217 20:21:28.452269  516955 certs.go:386] copying /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/apiserver.key.aa95dda5 -> /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/apiserver.key
	I1217 20:21:28.452320  516955 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/proxy-client.key
	I1217 20:21:28.452336  516955 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/proxy-client.crt with IP's: []
	I1217 20:21:28.683553  516955 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/proxy-client.crt ...
	I1217 20:21:28.683569  516955 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/proxy-client.crt: {Name:mkb2a4415e9a33fb82fe5f322957e240545a6b85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:21:28.683765  516955 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/proxy-client.key ...
	I1217 20:21:28.683775  516955 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/proxy-client.key: {Name:mk12b099ecb12e3b80bb27869a967d72ea7ac107 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
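
Note: with the profile certs written, a hedged way to inspect what was just generated, e.g. the apiserver cert's subject and SANs (path from this run; -ext requires OpenSSL 1.1.1 or newer):

    openssl x509 -noout -subject -ext subjectAltName \
        -in /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/apiserver.crt
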
	I1217 20:21:28.683977  516955 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem (1338 bytes)
	W1217 20:21:28.684021  516955 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412_empty.pem, impossibly tiny 0 bytes
	I1217 20:21:28.684029  516955 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 20:21:28.684054  516955 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem (1082 bytes)
	I1217 20:21:28.684079  516955 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem (1123 bytes)
	I1217 20:21:28.684104  516955 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem (1675 bytes)
	I1217 20:21:28.684154  516955 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem (1708 bytes)
	I1217 20:21:28.684723  516955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 20:21:28.703636  516955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 20:21:28.721970  516955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 20:21:28.740802  516955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1217 20:21:28.759048  516955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 20:21:28.776785  516955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1217 20:21:28.795331  516955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 20:21:28.813399  516955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 20:21:28.838793  516955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem --> /usr/share/ca-certificates/488412.pem (1338 bytes)
	I1217 20:21:28.857583  516955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem --> /usr/share/ca-certificates/4884122.pem (1708 bytes)
	I1217 20:21:28.878900  516955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 20:21:28.899197  516955 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 20:21:28.912469  516955 ssh_runner.go:195] Run: openssl version
	I1217 20:21:28.918899  516955 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/488412.pem
	I1217 20:21:28.926585  516955 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/488412.pem /etc/ssl/certs/488412.pem
	I1217 20:21:28.934814  516955 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/488412.pem
	I1217 20:21:28.938754  516955 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 20:21 /usr/share/ca-certificates/488412.pem
	I1217 20:21:28.938812  516955 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/488412.pem
	I1217 20:21:28.980280  516955 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 20:21:28.988101  516955 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4884122.pem
	I1217 20:21:28.995815  516955 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4884122.pem /etc/ssl/certs/4884122.pem
	I1217 20:21:29.004869  516955 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4884122.pem
	I1217 20:21:29.009698  516955 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 20:21 /usr/share/ca-certificates/4884122.pem
	I1217 20:21:29.009758  516955 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4884122.pem
	I1217 20:21:29.052210  516955 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 20:21:29.060586  516955 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:21:29.068211  516955 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 20:21:29.075775  516955 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:21:29.079966  516955 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:21:29.080040  516955 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:21:29.121807  516955 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 20:21:29.130011  516955 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 20:21:29.134112  516955 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 20:21:29.134167  516955 kubeadm.go:401] StartCluster: {Name:functional-655452 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-655452 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:21:29.134241  516955 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 20:21:29.134313  516955 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 20:21:29.167261  516955 cri.go:89] found id: ""
	I1217 20:21:29.167326  516955 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 20:21:29.175626  516955 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 20:21:29.183747  516955 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 20:21:29.183802  516955 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 20:21:29.191869  516955 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 20:21:29.191883  516955 kubeadm.go:158] found existing configuration files:
	
	I1217 20:21:29.191942  516955 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1217 20:21:29.200189  516955 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 20:21:29.200261  516955 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 20:21:29.207706  516955 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1217 20:21:29.215974  516955 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 20:21:29.216047  516955 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 20:21:29.229397  516955 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1217 20:21:29.239026  516955 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 20:21:29.239087  516955 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 20:21:29.247184  516955 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1217 20:21:29.255018  516955 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 20:21:29.255074  516955 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 20:21:29.262697  516955 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
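
Note: the long --ignore-preflight-errors list above downgrades each named preflight check to a warning (SystemVerification among them, per the docker-driver note earlier); it does not affect the wait-control-plane phase that later times out. Hedged illustration of the flag's shape:

    # Comma-separated check names; the special value "all" would
    # downgrade every preflight check.
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
        --ignore-preflight-errors=Swap,NumCPU,Mem
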
	I1217 20:21:29.301540  516955 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1217 20:21:29.301633  516955 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 20:21:29.378425  516955 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 20:21:29.378490  516955 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1217 20:21:29.378524  516955 kubeadm.go:319] OS: Linux
	I1217 20:21:29.378568  516955 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 20:21:29.378615  516955 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 20:21:29.378660  516955 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 20:21:29.378707  516955 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 20:21:29.378753  516955 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 20:21:29.378799  516955 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 20:21:29.378843  516955 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 20:21:29.378890  516955 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 20:21:29.378944  516955 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 20:21:29.452413  516955 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 20:21:29.452536  516955 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 20:21:29.452624  516955 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 20:21:29.462356  516955 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 20:21:29.468359  516955 out.go:252]   - Generating certificates and keys ...
	I1217 20:21:29.468485  516955 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 20:21:29.468570  516955 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 20:21:29.782592  516955 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 20:21:30.476054  516955 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 20:21:30.745367  516955 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 20:21:31.122739  516955 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 20:21:31.416978  516955 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 20:21:31.417283  516955 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [functional-655452 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1217 20:21:31.820526  516955 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 20:21:31.820897  516955 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [functional-655452 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1217 20:21:31.938309  516955 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 20:21:32.188469  516955 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 20:21:32.306611  516955 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 20:21:32.306946  516955 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 20:21:32.586215  516955 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 20:21:32.652425  516955 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 20:21:32.745292  516955 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 20:21:33.030760  516955 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 20:21:33.207824  516955 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 20:21:33.208684  516955 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 20:21:33.211555  516955 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 20:21:33.214886  516955 out.go:252]   - Booting up control plane ...
	I1217 20:21:33.214986  516955 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 20:21:33.215061  516955 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 20:21:33.215141  516955 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 20:21:33.232114  516955 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 20:21:33.232216  516955 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 20:21:33.240133  516955 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 20:21:33.240435  516955 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 20:21:33.240629  516955 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 20:21:33.375643  516955 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 20:21:33.375778  516955 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 20:25:33.374675  516955 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001110167s
	I1217 20:25:33.374704  516955 kubeadm.go:319] 
	I1217 20:25:33.374769  516955 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 20:25:33.374805  516955 kubeadm.go:319] 	- The kubelet is not running
	I1217 20:25:33.374918  516955 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 20:25:33.374925  516955 kubeadm.go:319] 
	I1217 20:25:33.375037  516955 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 20:25:33.375073  516955 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 20:25:33.375108  516955 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 20:25:33.375114  516955 kubeadm.go:319] 
	I1217 20:25:33.379917  516955 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1217 20:25:33.380361  516955 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 20:25:33.380480  516955 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 20:25:33.380715  516955 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1217 20:25:33.380721  516955 kubeadm.go:319] 
	I1217 20:25:33.380788  516955 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1217 20:25:33.380931  516955 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-655452 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-655452 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001110167s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
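
Note: the wait-control-plane phase above polls the kubelet's local healthz endpoint for up to 4m0s. A hedged reproduction of that probe, plus the two commands kubeadm itself suggests for debugging on the node:

    curl -sS http://127.0.0.1:10248/healthz; echo     # the probe kubeadm's check performs
    systemctl status kubelet --no-pager
    journalctl -xeu kubelet --no-pager | tail -n 50   # most recent kubelet errors
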
	
	I1217 20:25:33.381019  516955 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1217 20:25:33.796675  516955 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 20:25:33.809817  516955 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 20:25:33.809880  516955 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 20:25:33.818023  516955 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 20:25:33.818034  516955 kubeadm.go:158] found existing configuration files:
	
	I1217 20:25:33.818085  516955 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1217 20:25:33.826280  516955 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 20:25:33.826348  516955 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 20:25:33.834259  516955 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1217 20:25:33.842073  516955 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 20:25:33.842131  516955 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 20:25:33.849774  516955 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1217 20:25:33.857923  516955 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 20:25:33.857979  516955 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 20:25:33.866141  516955 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1217 20:25:33.874449  516955 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 20:25:33.874505  516955 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 20:25:33.883627  516955 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 20:25:33.923143  516955 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1217 20:25:33.923468  516955 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 20:25:34.014573  516955 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 20:25:34.014638  516955 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1217 20:25:34.014671  516955 kubeadm.go:319] OS: Linux
	I1217 20:25:34.014717  516955 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 20:25:34.014780  516955 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 20:25:34.014828  516955 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 20:25:34.014875  516955 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 20:25:34.014922  516955 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 20:25:34.014972  516955 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 20:25:34.015016  516955 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 20:25:34.015063  516955 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 20:25:34.015108  516955 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 20:25:34.085761  516955 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 20:25:34.085896  516955 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 20:25:34.086030  516955 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 20:25:34.096122  516955 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 20:25:34.099902  516955 out.go:252]   - Generating certificates and keys ...
	I1217 20:25:34.100012  516955 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 20:25:34.100075  516955 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 20:25:34.100150  516955 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1217 20:25:34.100210  516955 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1217 20:25:34.100278  516955 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1217 20:25:34.100331  516955 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1217 20:25:34.100394  516955 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1217 20:25:34.100468  516955 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1217 20:25:34.100541  516955 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1217 20:25:34.100612  516955 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1217 20:25:34.100649  516955 kubeadm.go:319] [certs] Using the existing "sa" key
	I1217 20:25:34.100703  516955 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 20:25:34.445825  516955 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 20:25:34.709191  516955 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 20:25:34.774305  516955 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 20:25:34.992207  516955 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 20:25:35.172482  516955 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 20:25:35.173404  516955 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 20:25:35.176156  516955 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 20:25:35.179627  516955 out.go:252]   - Booting up control plane ...
	I1217 20:25:35.179740  516955 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 20:25:35.179823  516955 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 20:25:35.179893  516955 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 20:25:35.196495  516955 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 20:25:35.196807  516955 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 20:25:35.204316  516955 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 20:25:35.204581  516955 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 20:25:35.204766  516955 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 20:25:35.340076  516955 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 20:25:35.340205  516955 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 20:29:35.339429  516955 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001034888s
	I1217 20:29:35.339447  516955 kubeadm.go:319] 
	I1217 20:29:35.339516  516955 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 20:29:35.339548  516955 kubeadm.go:319] 	- The kubelet is not running
	I1217 20:29:35.339699  516955 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 20:29:35.339711  516955 kubeadm.go:319] 
	I1217 20:29:35.339814  516955 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 20:29:35.339846  516955 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 20:29:35.339875  516955 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 20:29:35.339878  516955 kubeadm.go:319] 
	I1217 20:29:35.344847  516955 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1217 20:29:35.345324  516955 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 20:29:35.345445  516955 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 20:29:35.345683  516955 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1217 20:29:35.345687  516955 kubeadm.go:319] 
	I1217 20:29:35.345785  516955 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1217 20:29:35.345841  516955 kubeadm.go:403] duration metric: took 8m6.211677455s to StartCluster
	I1217 20:29:35.345886  516955 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:29:35.345950  516955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:29:35.371630  516955 cri.go:89] found id: ""
	I1217 20:29:35.371655  516955 logs.go:282] 0 containers: []
	W1217 20:29:35.371662  516955 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:29:35.371670  516955 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:29:35.371728  516955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:29:35.396812  516955 cri.go:89] found id: ""
	I1217 20:29:35.396826  516955 logs.go:282] 0 containers: []
	W1217 20:29:35.396833  516955 logs.go:284] No container was found matching "etcd"
	I1217 20:29:35.396837  516955 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:29:35.396893  516955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:29:35.426160  516955 cri.go:89] found id: ""
	I1217 20:29:35.426173  516955 logs.go:282] 0 containers: []
	W1217 20:29:35.426180  516955 logs.go:284] No container was found matching "coredns"
	I1217 20:29:35.426184  516955 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:29:35.426239  516955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:29:35.450594  516955 cri.go:89] found id: ""
	I1217 20:29:35.450607  516955 logs.go:282] 0 containers: []
	W1217 20:29:35.450614  516955 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:29:35.450619  516955 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:29:35.450674  516955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:29:35.475377  516955 cri.go:89] found id: ""
	I1217 20:29:35.475390  516955 logs.go:282] 0 containers: []
	W1217 20:29:35.475397  516955 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:29:35.475402  516955 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:29:35.475460  516955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:29:35.500048  516955 cri.go:89] found id: ""
	I1217 20:29:35.500061  516955 logs.go:282] 0 containers: []
	W1217 20:29:35.500068  516955 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:29:35.500073  516955 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:29:35.500131  516955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:29:35.524514  516955 cri.go:89] found id: ""
	I1217 20:29:35.524535  516955 logs.go:282] 0 containers: []
	W1217 20:29:35.524542  516955 logs.go:284] No container was found matching "kindnet"
	I1217 20:29:35.524550  516955 logs.go:123] Gathering logs for dmesg ...
	I1217 20:29:35.524561  516955 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:29:35.538870  516955 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:29:35.538885  516955 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:29:35.614161  516955 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:29:35.605914    4887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:29:35.606683    4887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:29:35.608322    4887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:29:35.608626    4887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:29:35.610097    4887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:29:35.605914    4887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:29:35.606683    4887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:29:35.608322    4887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:29:35.608626    4887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:29:35.610097    4887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:29:35.614175  516955 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:29:35.614187  516955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:29:35.649110  516955 logs.go:123] Gathering logs for container status ...
	I1217 20:29:35.649130  516955 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:29:35.683958  516955 logs.go:123] Gathering logs for kubelet ...
	I1217 20:29:35.683973  516955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1217 20:29:35.750009  516955 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001034888s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1217 20:29:35.750050  516955 out.go:285] * 
	W1217 20:29:35.750161  516955 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001034888s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1217 20:29:35.750206  516955 out.go:285] * 
	W1217 20:29:35.752404  516955 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 20:29:35.757779  516955 out.go:203] 
	W1217 20:29:35.761639  516955 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001034888s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1217 20:29:35.761685  516955 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1217 20:29:35.761705  516955 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1217 20:29:35.765378  516955 out.go:203] 
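
The start attempt above dies because the kubelet never reports healthy at its /healthz endpoint, so kubeadm's wait-control-plane phase times out after 4m0s. A minimal triage sketch, using only commands the output itself suggests (the first three run inside the functional-655452 node; the retry flag comes from minikube's own hint above):

	# Inspect the failing unit and its journal, as the kubeadm output advises
	systemctl status kubelet
	journalctl -xeu kubelet

	# Probe the endpoint kubeadm polls; connection refused means the kubelet exited
	curl -sSL http://127.0.0.1:10248/healthz

	# Retry the start with the cgroup-driver override suggested in the log
	out/minikube-linux-arm64 start -p functional-655452 --extra-config=kubelet.cgroup-driver=systemd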
	
	
	==> CRI-O <==
	Dec 17 20:21:27 functional-655452 crio[889]: time="2025-12-17T20:21:27.544828437Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 17 20:21:27 functional-655452 crio[889]: time="2025-12-17T20:21:27.54486632Z" level=info msg="Starting seccomp notifier watcher"
	Dec 17 20:21:27 functional-655452 crio[889]: time="2025-12-17T20:21:27.544910915Z" level=info msg="Create NRI interface"
	Dec 17 20:21:27 functional-655452 crio[889]: time="2025-12-17T20:21:27.545013004Z" level=info msg="built-in NRI default validator is disabled"
	Dec 17 20:21:27 functional-655452 crio[889]: time="2025-12-17T20:21:27.545021119Z" level=info msg="runtime interface created"
	Dec 17 20:21:27 functional-655452 crio[889]: time="2025-12-17T20:21:27.545034527Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 17 20:21:27 functional-655452 crio[889]: time="2025-12-17T20:21:27.54504105Z" level=info msg="runtime interface starting up..."
	Dec 17 20:21:27 functional-655452 crio[889]: time="2025-12-17T20:21:27.545046654Z" level=info msg="starting plugins..."
	Dec 17 20:21:27 functional-655452 crio[889]: time="2025-12-17T20:21:27.54506011Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 17 20:21:27 functional-655452 crio[889]: time="2025-12-17T20:21:27.545118597Z" level=info msg="No systemd watchdog enabled"
	Dec 17 20:21:27 functional-655452 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 17 20:21:29 functional-655452 crio[889]: time="2025-12-17T20:21:29.456059823Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-rc.1" id=a818eee8-38a4-4196-bd64-dd44dc894368 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:21:29 functional-655452 crio[889]: time="2025-12-17T20:21:29.458408364Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" id=9eb1f5e4-8056-4eb6-bc0c-e0cff46339ef name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:21:29 functional-655452 crio[889]: time="2025-12-17T20:21:29.459042191Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-rc.1" id=ad0c1b51-afab-415e-9918-910ed3e817dd name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:21:29 functional-655452 crio[889]: time="2025-12-17T20:21:29.459540755Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-rc.1" id=5e54ac55-efda-4330-99a8-c8c49ddd8672 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:21:29 functional-655452 crio[889]: time="2025-12-17T20:21:29.460345184Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=db8178a8-458d-4188-8462-80d56e4eb2e1 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:21:29 functional-655452 crio[889]: time="2025-12-17T20:21:29.460898945Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=6ad1e21d-3693-4394-a4bd-7f5824739f03 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:21:29 functional-655452 crio[889]: time="2025-12-17T20:21:29.46145282Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.6-0" id=f35df1fe-ad1f-45d6-a001-45203d078a1c name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:25:34 functional-655452 crio[889]: time="2025-12-17T20:25:34.089634052Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-rc.1" id=4d4fa1ea-8bbd-4356-ad90-27d4241cb9d2 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:25:34 functional-655452 crio[889]: time="2025-12-17T20:25:34.090392443Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" id=b5ba2963-50b8-49ba-8a37-a36f0eacc46e name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:25:34 functional-655452 crio[889]: time="2025-12-17T20:25:34.091030791Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-rc.1" id=8cd391c9-fffc-4286-a4c1-5568fe7c6b36 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:25:34 functional-655452 crio[889]: time="2025-12-17T20:25:34.091556375Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-rc.1" id=2d18c555-3ded-4611-8ef2-faf1113522c1 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:25:34 functional-655452 crio[889]: time="2025-12-17T20:25:34.092131322Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=6270f88b-72ef-4813-8361-fd9c42f4f331 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:25:34 functional-655452 crio[889]: time="2025-12-17T20:25:34.092652697Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=54a3a311-d52b-41f7-a322-85b9b38c8f94 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:25:34 functional-655452 crio[889]: time="2025-12-17T20:25:34.093109103Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.6-0" id=811127f3-43d7-4f0e-aa89-4515e30e72c9 name=/runtime.v1.ImageService/ImageStatus
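
Note that CRI-O itself is healthy throughout: the journal above shows a clean startup and routine ImageStatus checks, and every crictl query later in this report returns zero Kubernetes containers only because the kubelet never scheduled any. The same two commands the log gatherer runs can confirm this directly (a sketch, run on the node):

	# Runtime journal and full container inventory; both show no k8s workload
	sudo journalctl -u crio -n 400
	sudo crictl ps -a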
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:29:36.714432    5010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:29:36.715252    5010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:29:36.716851    5010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:29:36.717364    5010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:29:36.718944    5010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
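
	Every kubectl call fails identically because nothing is listening on this profile's apiserver port (8441): the kube-apiserver static pod was never started by the kubelet. A quick probe (a sketch, run inside the node, e.g. via minikube ssh) that separates "apiserver down" from a kubeconfig problem:

	# Connection refused here confirms no apiserver process, not bad credentials
	curl -k https://localhost:8441/healthz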
	
	
	==> dmesg <==
	[Dec17 19:02] hrtimer: interrupt took 71042327 ns
	[Dec17 20:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec17 20:12] overlayfs: idmapped layers are currently not supported
	[  +0.080288] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec17 20:17] overlayfs: idmapped layers are currently not supported
	[Dec17 20:18] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 20:29:36 up  3:12,  0 user,  load average: 0.04, 0.50, 1.24
	Linux functional-655452 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 20:29:34 functional-655452 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 20:29:34 functional-655452 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 645.
	Dec 17 20:29:34 functional-655452 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 20:29:34 functional-655452 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 20:29:34 functional-655452 kubelet[4817]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 20:29:34 functional-655452 kubelet[4817]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 20:29:34 functional-655452 kubelet[4817]: E1217 20:29:34.875490    4817 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 20:29:34 functional-655452 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 20:29:34 functional-655452 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 20:29:35 functional-655452 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 646.
	Dec 17 20:29:35 functional-655452 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 20:29:35 functional-655452 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 20:29:35 functional-655452 kubelet[4891]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 20:29:35 functional-655452 kubelet[4891]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 20:29:35 functional-655452 kubelet[4891]: E1217 20:29:35.635961    4891 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 20:29:35 functional-655452 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 20:29:35 functional-655452 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 20:29:36 functional-655452 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 647.
	Dec 17 20:29:36 functional-655452 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 20:29:36 functional-655452 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 20:29:36 functional-655452 kubelet[4930]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 20:29:36 functional-655452 kubelet[4930]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 20:29:36 functional-655452 kubelet[4930]: E1217 20:29:36.375572    4930 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 20:29:36 functional-655452 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 20:29:36 functional-655452 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-655452 -n functional-655452
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-655452 -n functional-655452: exit status 6 (330.002378ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1217 20:29:37.170100  522751 status.go:458] kubeconfig endpoint: get endpoint: "functional-655452" does not appear in /home/jenkins/minikube-integration/21808-485134/kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "functional-655452" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy (499.19s)
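
Aside from the cluster failure itself, the status helper also reports a stale kubectl context: "functional-655452" is missing from the test kubeconfig. That half is mechanically recoverable with the command minikube prints in its own warning (a sketch):

	# Rewrite the kubeconfig entry for this profile, per the WARNING above
	out/minikube-linux-arm64 update-context -p functional-655452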

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart (369.31s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart
I1217 20:29:37.185592  488412 config.go:182] Loaded profile config "functional-655452": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-655452 --alsologtostderr -v=8
E1217 20:30:30.852161  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-643319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:30:58.557872  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-643319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:33:56.661329  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:35:19.729312  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:35:30.852125  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-643319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-655452 --alsologtostderr -v=8: exit status 80 (6m6.155901331s)

-- stdout --
	* [functional-655452] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21808
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21808-485134/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-485134/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-655452" primary control-plane node in "functional-655452" cluster
	* Pulling base image v0.0.48-1765661130-22141 ...
	* Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	

-- /stdout --
** stderr ** 
	I1217 20:29:37.230217  522827 out.go:360] Setting OutFile to fd 1 ...
	I1217 20:29:37.230338  522827 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:29:37.230348  522827 out.go:374] Setting ErrFile to fd 2...
	I1217 20:29:37.230354  522827 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:29:37.230641  522827 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-485134/.minikube/bin
	I1217 20:29:37.231040  522827 out.go:368] Setting JSON to false
	I1217 20:29:37.231956  522827 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":11527,"bootTime":1765991851,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1217 20:29:37.232033  522827 start.go:143] virtualization:  
	I1217 20:29:37.235360  522827 out.go:179] * [functional-655452] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 20:29:37.239166  522827 out.go:179]   - MINIKUBE_LOCATION=21808
	I1217 20:29:37.239533  522827 notify.go:221] Checking for updates...
	I1217 20:29:37.245507  522827 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 20:29:37.248369  522827 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-485134/kubeconfig
	I1217 20:29:37.251209  522827 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-485134/.minikube
	I1217 20:29:37.254179  522827 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 20:29:37.257129  522827 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 20:29:37.260562  522827 config.go:182] Loaded profile config "functional-655452": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 20:29:37.260726  522827 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 20:29:37.289208  522827 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 20:29:37.289391  522827 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 20:29:37.344995  522827 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 20:29:37.33566048 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 20:29:37.345107  522827 docker.go:319] overlay module found
	I1217 20:29:37.348246  522827 out.go:179] * Using the docker driver based on existing profile
	I1217 20:29:37.351193  522827 start.go:309] selected driver: docker
	I1217 20:29:37.351220  522827 start.go:927] validating driver "docker" against &{Name:functional-655452 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-655452 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:29:37.351378  522827 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 20:29:37.351479  522827 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 20:29:37.406404  522827 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 20:29:37.397152083 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 20:29:37.406839  522827 cni.go:84] Creating CNI manager for ""
	I1217 20:29:37.406903  522827 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 20:29:37.406958  522827 start.go:353] cluster config:
	{Name:functional-655452 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-655452 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:29:37.410074  522827 out.go:179] * Starting "functional-655452" primary control-plane node in "functional-655452" cluster
	I1217 20:29:37.413044  522827 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 20:29:37.415960  522827 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 20:29:37.418922  522827 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1217 20:29:37.418997  522827 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21808-485134/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4
	I1217 20:29:37.419012  522827 cache.go:65] Caching tarball of preloaded images
	I1217 20:29:37.419028  522827 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 20:29:37.419099  522827 preload.go:238] Found /home/jenkins/minikube-integration/21808-485134/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1217 20:29:37.419110  522827 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on crio
	I1217 20:29:37.419218  522827 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/config.json ...
	I1217 20:29:37.438883  522827 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 20:29:37.438908  522827 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 20:29:37.438929  522827 cache.go:243] Successfully downloaded all kic artifacts
	I1217 20:29:37.438964  522827 start.go:360] acquireMachinesLock for functional-655452: {Name:mk8b480106227149e1b79da3c4f580b9dc5e0f96 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 20:29:37.439024  522827 start.go:364] duration metric: took 37.399µs to acquireMachinesLock for "functional-655452"
	I1217 20:29:37.439047  522827 start.go:96] Skipping create...Using existing machine configuration
	I1217 20:29:37.439057  522827 fix.go:54] fixHost starting: 
	I1217 20:29:37.439341  522827 cli_runner.go:164] Run: docker container inspect functional-655452 --format={{.State.Status}}
	I1217 20:29:37.456072  522827 fix.go:112] recreateIfNeeded on functional-655452: state=Running err=<nil>
	W1217 20:29:37.456113  522827 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 20:29:37.459179  522827 out.go:252] * Updating the running docker "functional-655452" container ...
	I1217 20:29:37.459210  522827 machine.go:94] provisionDockerMachine start ...
	I1217 20:29:37.459290  522827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
	I1217 20:29:37.476101  522827 main.go:143] libmachine: Using SSH client type: native
	I1217 20:29:37.476449  522827 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1217 20:29:37.476466  522827 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 20:29:37.607148  522827 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-655452
	
	I1217 20:29:37.607176  522827 ubuntu.go:182] provisioning hostname "functional-655452"
	I1217 20:29:37.607253  522827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
	I1217 20:29:37.625523  522827 main.go:143] libmachine: Using SSH client type: native
	I1217 20:29:37.625850  522827 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1217 20:29:37.625869  522827 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-655452 && echo "functional-655452" | sudo tee /etc/hostname
	I1217 20:29:37.765012  522827 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-655452
	
	I1217 20:29:37.765095  522827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
	I1217 20:29:37.783574  522827 main.go:143] libmachine: Using SSH client type: native
	I1217 20:29:37.784233  522827 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1217 20:29:37.784256  522827 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-655452' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-655452/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-655452' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 20:29:37.923858  522827 main.go:143] libmachine: SSH cmd err, output: <nil>: 
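For reference, the guarded script above is idempotent: it rewrites an existing 127.0.1.1 entry or appends one, so repeated provisioning runs leave exactly one line behind. On the default Debian layout the result should look like this (a sketch; the file itself is not captured in this log):

  $ grep functional-655452 /etc/hosts
  127.0.1.1 functional-655452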
	I1217 20:29:37.923885  522827 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21808-485134/.minikube CaCertPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21808-485134/.minikube}
	I1217 20:29:37.923918  522827 ubuntu.go:190] setting up certificates
	I1217 20:29:37.923930  522827 provision.go:84] configureAuth start
	I1217 20:29:37.923995  522827 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-655452
	I1217 20:29:37.942198  522827 provision.go:143] copyHostCerts
	I1217 20:29:37.942245  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem
	I1217 20:29:37.942294  522827 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem, removing ...
	I1217 20:29:37.942308  522827 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem
	I1217 20:29:37.942385  522827 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem (1082 bytes)
	I1217 20:29:37.942483  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem
	I1217 20:29:37.942506  522827 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem, removing ...
	I1217 20:29:37.942510  522827 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem
	I1217 20:29:37.942538  522827 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem (1123 bytes)
	I1217 20:29:37.942584  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem
	I1217 20:29:37.942605  522827 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem, removing ...
	I1217 20:29:37.942613  522827 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem
	I1217 20:29:37.942638  522827 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem (1675 bytes)
	I1217 20:29:37.942696  522827 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem org=jenkins.functional-655452 san=[127.0.0.1 192.168.49.2 functional-655452 localhost minikube]
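The server cert generated here is signed by the minikube CA and carries the SANs listed above. A rough openssl CLI equivalent, assuming the file names from the log and a throwaway CSR (minikube does this in Go, not by shelling out to openssl):

  # hypothetical reconstruction of the server-cert generation step
  openssl req -new -newkey rsa:2048 -nodes \
    -keyout server-key.pem -out server.csr -subj "/O=jenkins.functional-655452"
  openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
    -out server.pem -days 365 \
    -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.49.2,DNS:functional-655452,DNS:localhost,DNS:minikube")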
	I1217 20:29:38.205373  522827 provision.go:177] copyRemoteCerts
	I1217 20:29:38.205444  522827 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 20:29:38.205488  522827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
	I1217 20:29:38.222940  522827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/functional-655452/id_rsa Username:docker}
	I1217 20:29:38.324557  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1217 20:29:38.324643  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 20:29:38.342369  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1217 20:29:38.342442  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 20:29:38.361702  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1217 20:29:38.361816  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 20:29:38.379229  522827 provision.go:87] duration metric: took 455.281269ms to configureAuth
	I1217 20:29:38.379306  522827 ubuntu.go:206] setting minikube options for container-runtime
	I1217 20:29:38.379506  522827 config.go:182] Loaded profile config "functional-655452": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 20:29:38.379650  522827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
	I1217 20:29:38.397098  522827 main.go:143] libmachine: Using SSH client type: native
	I1217 20:29:38.397425  522827 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1217 20:29:38.397449  522827 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 20:29:38.710104  522827 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 20:29:38.710129  522827 machine.go:97] duration metric: took 1.250909554s to provisionDockerMachine
	I1217 20:29:38.710141  522827 start.go:293] postStartSetup for "functional-655452" (driver="docker")
	I1217 20:29:38.710173  522827 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 20:29:38.710243  522827 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 20:29:38.710290  522827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
	I1217 20:29:38.729105  522827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/functional-655452/id_rsa Username:docker}
	I1217 20:29:38.823561  522827 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 20:29:38.826921  522827 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1217 20:29:38.826944  522827 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1217 20:29:38.826949  522827 command_runner.go:130] > VERSION_ID="12"
	I1217 20:29:38.826954  522827 command_runner.go:130] > VERSION="12 (bookworm)"
	I1217 20:29:38.826958  522827 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1217 20:29:38.826962  522827 command_runner.go:130] > ID=debian
	I1217 20:29:38.826966  522827 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1217 20:29:38.826971  522827 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1217 20:29:38.826976  522827 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1217 20:29:38.827033  522827 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 20:29:38.827056  522827 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 20:29:38.827068  522827 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-485134/.minikube/addons for local assets ...
	I1217 20:29:38.827127  522827 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-485134/.minikube/files for local assets ...
	I1217 20:29:38.827213  522827 filesync.go:149] local asset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem -> 4884122.pem in /etc/ssl/certs
	I1217 20:29:38.827224  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem -> /etc/ssl/certs/4884122.pem
	I1217 20:29:38.827310  522827 filesync.go:149] local asset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/test/nested/copy/488412/hosts -> hosts in /etc/test/nested/copy/488412
	I1217 20:29:38.827318  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/test/nested/copy/488412/hosts -> /etc/test/nested/copy/488412/hosts
	I1217 20:29:38.827361  522827 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/488412
	I1217 20:29:38.835073  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem --> /etc/ssl/certs/4884122.pem (1708 bytes)
	I1217 20:29:38.853051  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/test/nested/copy/488412/hosts --> /etc/test/nested/copy/488412/hosts (40 bytes)
	I1217 20:29:38.870277  522827 start.go:296] duration metric: took 160.119138ms for postStartSetup
	I1217 20:29:38.870416  522827 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 20:29:38.870497  522827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
	I1217 20:29:38.887313  522827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/functional-655452/id_rsa Username:docker}
	I1217 20:29:38.980667  522827 command_runner.go:130] > 14%
	I1217 20:29:38.980748  522827 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 20:29:38.985147  522827 command_runner.go:130] > 169G
	I1217 20:29:38.985687  522827 fix.go:56] duration metric: took 1.546626529s for fixHost
	I1217 20:29:38.985712  522827 start.go:83] releasing machines lock for "functional-655452", held for 1.546675825s
	I1217 20:29:38.985789  522827 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-655452
	I1217 20:29:39.004882  522827 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem (1338 bytes)
	W1217 20:29:39.004958  522827 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412_empty.pem, impossibly tiny 0 bytes
	I1217 20:29:39.004969  522827 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 20:29:39.005005  522827 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem (1082 bytes)
	I1217 20:29:39.005049  522827 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem (1123 bytes)
	I1217 20:29:39.005073  522827 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem (1675 bytes)
	I1217 20:29:39.005126  522827 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem (1708 bytes)
	I1217 20:29:39.005177  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:29:39.005197  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem -> /usr/share/ca-certificates/488412.pem
	I1217 20:29:39.005217  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem -> /usr/share/ca-certificates/4884122.pem
	I1217 20:29:39.005238  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 20:29:39.005294  522827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
	I1217 20:29:39.023309  522827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/functional-655452/id_rsa Username:docker}
	I1217 20:29:39.128919  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem --> /usr/share/ca-certificates/488412.pem (1338 bytes)
	I1217 20:29:39.146238  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem --> /usr/share/ca-certificates/4884122.pem (1708 bytes)
	I1217 20:29:39.163663  522827 ssh_runner.go:195] Run: openssl version
	I1217 20:29:39.169395  522827 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1217 20:29:39.169821  522827 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/488412.pem
	I1217 20:29:39.177042  522827 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/488412.pem /etc/ssl/certs/488412.pem
	I1217 20:29:39.184227  522827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/488412.pem
	I1217 20:29:39.187671  522827 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 17 20:21 /usr/share/ca-certificates/488412.pem
	I1217 20:29:39.187835  522827 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 20:21 /usr/share/ca-certificates/488412.pem
	I1217 20:29:39.187899  522827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/488412.pem
	I1217 20:29:39.232645  522827 command_runner.go:130] > 51391683
	I1217 20:29:39.233156  522827 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 20:29:39.240764  522827 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4884122.pem
	I1217 20:29:39.248070  522827 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4884122.pem /etc/ssl/certs/4884122.pem
	I1217 20:29:39.256139  522827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4884122.pem
	I1217 20:29:39.260468  522827 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 17 20:21 /usr/share/ca-certificates/4884122.pem
	I1217 20:29:39.260613  522827 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 20:21 /usr/share/ca-certificates/4884122.pem
	I1217 20:29:39.260717  522827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4884122.pem
	I1217 20:29:39.301324  522827 command_runner.go:130] > 3ec20f2e
	I1217 20:29:39.301774  522827 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 20:29:39.309564  522827 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:29:39.316908  522827 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 20:29:39.330430  522827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:29:39.334931  522827 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 17 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:29:39.335647  522827 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:29:39.335725  522827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:29:39.377554  522827 command_runner.go:130] > b5213941
	I1217 20:29:39.378955  522827 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 20:29:39.389619  522827 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-certificates >/dev/null 2>&1 && sudo update-ca-certificates || true"
	I1217 20:29:39.393257  522827 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-trust >/dev/null 2>&1 && sudo update-ca-trust extract || true"
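The hash-and-symlink dance in the preceding steps follows OpenSSL's CA lookup convention: a certificate is found in /etc/ssl/certs via a symlink named <subject-hash>.0 that points at the PEM file. The equivalent one-liner, as a sketch:

  cert=/usr/share/ca-certificates/minikubeCA.pem
  sudo ln -fs "$cert" /etc/ssl/certs/$(openssl x509 -hash -noout -in "$cert").0

For this CA that yields /etc/ssl/certs/b5213941.0, matching the `test -L` check above.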
	I1217 20:29:39.396841  522827 ssh_runner.go:195] Run: cat /version.json
	I1217 20:29:39.396923  522827 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 20:29:39.487006  522827 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1217 20:29:39.489563  522827 command_runner.go:130] > {"iso_version": "v1.37.0-1765579389-22117", "kicbase_version": "v0.0.48-1765661130-22141", "minikube_version": "v1.37.0", "commit": "cbb33128a244032d08f8fc6e6c9f03b30f0da3e4"}
	I1217 20:29:39.489734  522827 ssh_runner.go:195] Run: systemctl --version
	I1217 20:29:39.495686  522827 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1217 20:29:39.495789  522827 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1217 20:29:39.496199  522827 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 20:29:39.531768  522827 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1217 20:29:39.536045  522827 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1217 20:29:39.536498  522827 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 20:29:39.536609  522827 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 20:29:39.544584  522827 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 20:29:39.544609  522827 start.go:496] detecting cgroup driver to use...
	I1217 20:29:39.544639  522827 detect.go:187] detected "cgroupfs" cgroup driver on host os
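A quick way to reproduce this detection by hand with the docker driver is to ask the host daemon directly (a sketch; minikube's detect.go applies its own internal heuristics):

  docker info --format '{{.CgroupDriver}}'   # prints "cgroupfs" or "systemd"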
	I1217 20:29:39.544686  522827 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 20:29:39.559677  522827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 20:29:39.572537  522827 docker.go:218] disabling cri-docker service (if available) ...
	I1217 20:29:39.572629  522827 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 20:29:39.588063  522827 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 20:29:39.601417  522827 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 20:29:39.711338  522827 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 20:29:39.828534  522827 docker.go:234] disabling docker service ...
	I1217 20:29:39.828602  522827 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 20:29:39.843450  522827 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 20:29:39.856661  522827 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 20:29:39.988443  522827 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 20:29:40.133139  522827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 20:29:40.147217  522827 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 20:29:40.161697  522827 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
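With /etc/crictl.yaml in place, crictl no longer needs the endpoint flag on every invocation; the explicit form below is equivalent (a usage sketch):

  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version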
	I1217 20:29:40.163096  522827 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 20:29:40.163182  522827 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:29:40.173178  522827 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1217 20:29:40.173338  522827 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:29:40.182803  522827 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:29:40.192168  522827 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:29:40.201463  522827 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 20:29:40.209602  522827 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:29:40.218600  522827 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:29:40.227088  522827 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
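Taken together, the sed edits above leave the drop-in with roughly this fragment (section headers omitted; reconstructed from the commands, not a captured file):

  pause_image = "registry.k8s.io/pause:3.10.1"
  cgroup_manager = "cgroupfs"
  conmon_cgroup = "pod"
  default_sysctls = [
    "net.ipv4.ip_unprivileged_port_start=0",
  ]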
	I1217 20:29:40.236327  522827 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 20:29:40.243154  522827 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1217 20:29:40.244193  522827 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 20:29:40.251635  522827 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:29:40.361488  522827 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 20:29:40.546740  522827 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 20:29:40.546847  522827 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 20:29:40.551021  522827 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1217 20:29:40.551089  522827 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1217 20:29:40.551102  522827 command_runner.go:130] > Device: 0,72	Inode: 1636        Links: 1
	I1217 20:29:40.551127  522827 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1217 20:29:40.551137  522827 command_runner.go:130] > Access: 2025-12-17 20:29:40.478294705 +0000
	I1217 20:29:40.551143  522827 command_runner.go:130] > Modify: 2025-12-17 20:29:40.478294705 +0000
	I1217 20:29:40.551149  522827 command_runner.go:130] > Change: 2025-12-17 20:29:40.478294705 +0000
	I1217 20:29:40.551152  522827 command_runner.go:130] >  Birth: -
	I1217 20:29:40.551189  522827 start.go:564] Will wait 60s for crictl version
	I1217 20:29:40.551247  522827 ssh_runner.go:195] Run: which crictl
	I1217 20:29:40.554786  522827 command_runner.go:130] > /usr/local/bin/crictl
	I1217 20:29:40.554923  522827 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 20:29:40.577444  522827 command_runner.go:130] > Version:  0.1.0
	I1217 20:29:40.577470  522827 command_runner.go:130] > RuntimeName:  cri-o
	I1217 20:29:40.577476  522827 command_runner.go:130] > RuntimeVersion:  1.34.3
	I1217 20:29:40.577491  522827 command_runner.go:130] > RuntimeApiVersion:  v1
	I1217 20:29:40.579694  522827 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 20:29:40.579819  522827 ssh_runner.go:195] Run: crio --version
	I1217 20:29:40.609324  522827 command_runner.go:130] > crio version 1.34.3
	I1217 20:29:40.609350  522827 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1217 20:29:40.609357  522827 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1217 20:29:40.609362  522827 command_runner.go:130] >    GitTreeState:   dirty
	I1217 20:29:40.609367  522827 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1217 20:29:40.609371  522827 command_runner.go:130] >    GoVersion:      go1.24.6
	I1217 20:29:40.609375  522827 command_runner.go:130] >    Compiler:       gc
	I1217 20:29:40.609382  522827 command_runner.go:130] >    Platform:       linux/arm64
	I1217 20:29:40.609386  522827 command_runner.go:130] >    Linkmode:       static
	I1217 20:29:40.609390  522827 command_runner.go:130] >    BuildTags:
	I1217 20:29:40.609393  522827 command_runner.go:130] >      static
	I1217 20:29:40.609397  522827 command_runner.go:130] >      netgo
	I1217 20:29:40.609401  522827 command_runner.go:130] >      osusergo
	I1217 20:29:40.609410  522827 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1217 20:29:40.609414  522827 command_runner.go:130] >      seccomp
	I1217 20:29:40.609421  522827 command_runner.go:130] >      apparmor
	I1217 20:29:40.609424  522827 command_runner.go:130] >      selinux
	I1217 20:29:40.609429  522827 command_runner.go:130] >    LDFlags:          unknown
	I1217 20:29:40.609433  522827 command_runner.go:130] >    SeccompEnabled:   true
	I1217 20:29:40.609441  522827 command_runner.go:130] >    AppArmorEnabled:  false
	I1217 20:29:40.609527  522827 ssh_runner.go:195] Run: crio --version
	I1217 20:29:40.638467  522827 command_runner.go:130] > crio version 1.34.3
	I1217 20:29:40.638491  522827 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1217 20:29:40.638499  522827 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1217 20:29:40.638505  522827 command_runner.go:130] >    GitTreeState:   dirty
	I1217 20:29:40.638509  522827 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1217 20:29:40.638516  522827 command_runner.go:130] >    GoVersion:      go1.24.6
	I1217 20:29:40.638520  522827 command_runner.go:130] >    Compiler:       gc
	I1217 20:29:40.638533  522827 command_runner.go:130] >    Platform:       linux/arm64
	I1217 20:29:40.638543  522827 command_runner.go:130] >    Linkmode:       static
	I1217 20:29:40.638547  522827 command_runner.go:130] >    BuildTags:
	I1217 20:29:40.638550  522827 command_runner.go:130] >      static
	I1217 20:29:40.638554  522827 command_runner.go:130] >      netgo
	I1217 20:29:40.638558  522827 command_runner.go:130] >      osusergo
	I1217 20:29:40.638568  522827 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1217 20:29:40.638572  522827 command_runner.go:130] >      seccomp
	I1217 20:29:40.638576  522827 command_runner.go:130] >      apparmor
	I1217 20:29:40.638583  522827 command_runner.go:130] >      selinux
	I1217 20:29:40.638587  522827 command_runner.go:130] >    LDFlags:          unknown
	I1217 20:29:40.638592  522827 command_runner.go:130] >    SeccompEnabled:   true
	I1217 20:29:40.638604  522827 command_runner.go:130] >    AppArmorEnabled:  false
	I1217 20:29:40.644077  522827 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	I1217 20:29:40.647046  522827 cli_runner.go:164] Run: docker network inspect functional-655452 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 20:29:40.665190  522827 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1217 20:29:40.669398  522827 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1217 20:29:40.669593  522827 kubeadm.go:884] updating cluster {Name:functional-655452 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-655452 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 20:29:40.669700  522827 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1217 20:29:40.669779  522827 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 20:29:40.704282  522827 command_runner.go:130] > {
	I1217 20:29:40.704302  522827 command_runner.go:130] >   "images":  [
	I1217 20:29:40.704307  522827 command_runner.go:130] >     {
	I1217 20:29:40.704316  522827 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1217 20:29:40.704321  522827 command_runner.go:130] >       "repoTags":  [
	I1217 20:29:40.704328  522827 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1217 20:29:40.704331  522827 command_runner.go:130] >       ],
	I1217 20:29:40.704335  522827 command_runner.go:130] >       "repoDigests":  [
	I1217 20:29:40.704350  522827 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1217 20:29:40.704362  522827 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1217 20:29:40.704370  522827 command_runner.go:130] >       ],
	I1217 20:29:40.704374  522827 command_runner.go:130] >       "size":  "111333938",
	I1217 20:29:40.704379  522827 command_runner.go:130] >       "username":  "",
	I1217 20:29:40.704389  522827 command_runner.go:130] >       "pinned":  false
	I1217 20:29:40.704403  522827 command_runner.go:130] >     },
	I1217 20:29:40.704406  522827 command_runner.go:130] >     {
	I1217 20:29:40.704413  522827 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1217 20:29:40.704419  522827 command_runner.go:130] >       "repoTags":  [
	I1217 20:29:40.704425  522827 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1217 20:29:40.704429  522827 command_runner.go:130] >       ],
	I1217 20:29:40.704433  522827 command_runner.go:130] >       "repoDigests":  [
	I1217 20:29:40.704445  522827 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1217 20:29:40.704454  522827 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1217 20:29:40.704460  522827 command_runner.go:130] >       ],
	I1217 20:29:40.704464  522827 command_runner.go:130] >       "size":  "29037500",
	I1217 20:29:40.704468  522827 command_runner.go:130] >       "username":  "",
	I1217 20:29:40.704476  522827 command_runner.go:130] >       "pinned":  false
	I1217 20:29:40.704482  522827 command_runner.go:130] >     },
	I1217 20:29:40.704485  522827 command_runner.go:130] >     {
	I1217 20:29:40.704494  522827 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1217 20:29:40.704503  522827 command_runner.go:130] >       "repoTags":  [
	I1217 20:29:40.704509  522827 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1217 20:29:40.704512  522827 command_runner.go:130] >       ],
	I1217 20:29:40.704516  522827 command_runner.go:130] >       "repoDigests":  [
	I1217 20:29:40.704528  522827 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1217 20:29:40.704536  522827 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1217 20:29:40.704542  522827 command_runner.go:130] >       ],
	I1217 20:29:40.704547  522827 command_runner.go:130] >       "size":  "74491780",
	I1217 20:29:40.704551  522827 command_runner.go:130] >       "username":  "nonroot",
	I1217 20:29:40.704556  522827 command_runner.go:130] >       "pinned":  false
	I1217 20:29:40.704561  522827 command_runner.go:130] >     },
	I1217 20:29:40.704568  522827 command_runner.go:130] >     {
	I1217 20:29:40.704579  522827 command_runner.go:130] >       "id":  "271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57",
	I1217 20:29:40.704583  522827 command_runner.go:130] >       "repoTags":  [
	I1217 20:29:40.704588  522827 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.6-0"
	I1217 20:29:40.704594  522827 command_runner.go:130] >       ],
	I1217 20:29:40.704598  522827 command_runner.go:130] >       "repoDigests":  [
	I1217 20:29:40.704605  522827 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890",
	I1217 20:29:40.704613  522827 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:aa0d8bc8f6a6c3655b8efe0a10c5bf052f5574ebe13f904c5b0c9002ce4b2561"
	I1217 20:29:40.704619  522827 command_runner.go:130] >       ],
	I1217 20:29:40.704623  522827 command_runner.go:130] >       "size":  "60850387",
	I1217 20:29:40.704626  522827 command_runner.go:130] >       "uid":  {
	I1217 20:29:40.704630  522827 command_runner.go:130] >         "value":  "0"
	I1217 20:29:40.704636  522827 command_runner.go:130] >       },
	I1217 20:29:40.704645  522827 command_runner.go:130] >       "username":  "",
	I1217 20:29:40.704657  522827 command_runner.go:130] >       "pinned":  false
	I1217 20:29:40.704660  522827 command_runner.go:130] >     },
	I1217 20:29:40.704664  522827 command_runner.go:130] >     {
	I1217 20:29:40.704673  522827 command_runner.go:130] >       "id":  "3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54",
	I1217 20:29:40.704679  522827 command_runner.go:130] >       "repoTags":  [
	I1217 20:29:40.704685  522827 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-rc.1"
	I1217 20:29:40.704689  522827 command_runner.go:130] >       ],
	I1217 20:29:40.704693  522827 command_runner.go:130] >       "repoDigests":  [
	I1217 20:29:40.704704  522827 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:58367b5c0428495c0c12411fa7a018f5d40fe57307b85d8935b1ed35706ff7ee",
	I1217 20:29:40.704721  522827 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:e6ee3594f9ff061c53d6721bc04b810ec4227e28da3bd98e59206d552d45cde8"
	I1217 20:29:40.704724  522827 command_runner.go:130] >       ],
	I1217 20:29:40.704729  522827 command_runner.go:130] >       "size":  "85015535",
	I1217 20:29:40.704735  522827 command_runner.go:130] >       "uid":  {
	I1217 20:29:40.704739  522827 command_runner.go:130] >         "value":  "0"
	I1217 20:29:40.704742  522827 command_runner.go:130] >       },
	I1217 20:29:40.704746  522827 command_runner.go:130] >       "username":  "",
	I1217 20:29:40.704753  522827 command_runner.go:130] >       "pinned":  false
	I1217 20:29:40.704756  522827 command_runner.go:130] >     },
	I1217 20:29:40.704759  522827 command_runner.go:130] >     {
	I1217 20:29:40.704772  522827 command_runner.go:130] >       "id":  "a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a",
	I1217 20:29:40.704779  522827 command_runner.go:130] >       "repoTags":  [
	I1217 20:29:40.704785  522827 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1"
	I1217 20:29:40.704788  522827 command_runner.go:130] >       ],
	I1217 20:29:40.704793  522827 command_runner.go:130] >       "repoDigests":  [
	I1217 20:29:40.704803  522827 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:42360249c0c729ed0542bc8e4a6cd9ba4df358a4e5a140f6c24d5f966ee5121f",
	I1217 20:29:40.704813  522827 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:57ab0f75f58d99f4be7bff7bdda015fcbf1b7c20e58ba2722c8c39f751dc8c98"
	I1217 20:29:40.704822  522827 command_runner.go:130] >       ],
	I1217 20:29:40.704827  522827 command_runner.go:130] >       "size":  "72170325",
	I1217 20:29:40.704831  522827 command_runner.go:130] >       "uid":  {
	I1217 20:29:40.704835  522827 command_runner.go:130] >         "value":  "0"
	I1217 20:29:40.704838  522827 command_runner.go:130] >       },
	I1217 20:29:40.704842  522827 command_runner.go:130] >       "username":  "",
	I1217 20:29:40.704846  522827 command_runner.go:130] >       "pinned":  false
	I1217 20:29:40.704848  522827 command_runner.go:130] >     },
	I1217 20:29:40.704851  522827 command_runner.go:130] >     {
	I1217 20:29:40.704858  522827 command_runner.go:130] >       "id":  "7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e",
	I1217 20:29:40.704861  522827 command_runner.go:130] >       "repoTags":  [
	I1217 20:29:40.704866  522827 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-rc.1"
	I1217 20:29:40.704870  522827 command_runner.go:130] >       ],
	I1217 20:29:40.704875  522827 command_runner.go:130] >       "repoDigests":  [
	I1217 20:29:40.704883  522827 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:709cbcd809826ad98b553d8e283a04db70fa653526d1c2a5e1b50000701b2b6f",
	I1217 20:29:40.704894  522827 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bdd1fa8b53558a2e1967379a36b085c93faf15581e5fa9f212baf679d89c5bb5"
	I1217 20:29:40.704898  522827 command_runner.go:130] >       ],
	I1217 20:29:40.704903  522827 command_runner.go:130] >       "size":  "74107287",
	I1217 20:29:40.704910  522827 command_runner.go:130] >       "username":  "",
	I1217 20:29:40.704914  522827 command_runner.go:130] >       "pinned":  false
	I1217 20:29:40.704926  522827 command_runner.go:130] >     },
	I1217 20:29:40.704930  522827 command_runner.go:130] >     {
	I1217 20:29:40.704936  522827 command_runner.go:130] >       "id":  "abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde",
	I1217 20:29:40.704940  522827 command_runner.go:130] >       "repoTags":  [
	I1217 20:29:40.704946  522827 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-rc.1"
	I1217 20:29:40.704949  522827 command_runner.go:130] >       ],
	I1217 20:29:40.704963  522827 command_runner.go:130] >       "repoDigests":  [
	I1217 20:29:40.704975  522827 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8155e3db27c7081abfc8eb5da70820cfeaf0bba7449e45360e8220e670f417d3",
	I1217 20:29:40.704993  522827 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:9ac9664e74153a60bf2c27af77561abc33d85a716a48893c7e50ad356adc4ea0"
	I1217 20:29:40.705000  522827 command_runner.go:130] >       ],
	I1217 20:29:40.705005  522827 command_runner.go:130] >       "size":  "49822549",
	I1217 20:29:40.705008  522827 command_runner.go:130] >       "uid":  {
	I1217 20:29:40.705014  522827 command_runner.go:130] >         "value":  "0"
	I1217 20:29:40.705017  522827 command_runner.go:130] >       },
	I1217 20:29:40.705025  522827 command_runner.go:130] >       "username":  "",
	I1217 20:29:40.705029  522827 command_runner.go:130] >       "pinned":  false
	I1217 20:29:40.705033  522827 command_runner.go:130] >     },
	I1217 20:29:40.705036  522827 command_runner.go:130] >     {
	I1217 20:29:40.705043  522827 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1217 20:29:40.705055  522827 command_runner.go:130] >       "repoTags":  [
	I1217 20:29:40.705060  522827 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1217 20:29:40.705063  522827 command_runner.go:130] >       ],
	I1217 20:29:40.705068  522827 command_runner.go:130] >       "repoDigests":  [
	I1217 20:29:40.705078  522827 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1217 20:29:40.705089  522827 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1217 20:29:40.705094  522827 command_runner.go:130] >       ],
	I1217 20:29:40.705097  522827 command_runner.go:130] >       "size":  "519884",
	I1217 20:29:40.705101  522827 command_runner.go:130] >       "uid":  {
	I1217 20:29:40.705108  522827 command_runner.go:130] >         "value":  "65535"
	I1217 20:29:40.705111  522827 command_runner.go:130] >       },
	I1217 20:29:40.705115  522827 command_runner.go:130] >       "username":  "",
	I1217 20:29:40.705119  522827 command_runner.go:130] >       "pinned":  true
	I1217 20:29:40.705128  522827 command_runner.go:130] >     }
	I1217 20:29:40.705133  522827 command_runner.go:130] >   ]
	I1217 20:29:40.705136  522827 command_runner.go:130] > }
	I1217 20:29:40.705310  522827 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 20:29:40.705323  522827 crio.go:433] Images already preloaded, skipping extraction
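To spot-check the same inventory by hand, the JSON emitted by crictl above pipes cleanly into jq (assuming jq is installed on the node; a sketch):

  sudo crictl images --output json | jq -r '.images[].repoTags[]'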
	I1217 20:29:40.705384  522827 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 20:29:40.728606  522827 command_runner.go:130] > {
	I1217 20:29:40.728624  522827 command_runner.go:130] >   "images":  [
	I1217 20:29:40.728629  522827 command_runner.go:130] >     {
	I1217 20:29:40.728638  522827 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1217 20:29:40.728643  522827 command_runner.go:130] >       "repoTags":  [
	I1217 20:29:40.728657  522827 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1217 20:29:40.728665  522827 command_runner.go:130] >       ],
	I1217 20:29:40.728669  522827 command_runner.go:130] >       "repoDigests":  [
	I1217 20:29:40.728678  522827 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1217 20:29:40.728686  522827 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1217 20:29:40.728689  522827 command_runner.go:130] >       ],
	I1217 20:29:40.728694  522827 command_runner.go:130] >       "size":  "111333938",
	I1217 20:29:40.728698  522827 command_runner.go:130] >       "username":  "",
	I1217 20:29:40.728705  522827 command_runner.go:130] >       "pinned":  false
	I1217 20:29:40.728708  522827 command_runner.go:130] >     },
	I1217 20:29:40.728711  522827 command_runner.go:130] >     {
	I1217 20:29:40.728718  522827 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1217 20:29:40.728726  522827 command_runner.go:130] >       "repoTags":  [
	I1217 20:29:40.728731  522827 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1217 20:29:40.728735  522827 command_runner.go:130] >       ],
	I1217 20:29:40.728739  522827 command_runner.go:130] >       "repoDigests":  [
	I1217 20:29:40.728747  522827 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1217 20:29:40.728756  522827 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1217 20:29:40.728759  522827 command_runner.go:130] >       ],
	I1217 20:29:40.728763  522827 command_runner.go:130] >       "size":  "29037500",
	I1217 20:29:40.728767  522827 command_runner.go:130] >       "username":  "",
	I1217 20:29:40.728774  522827 command_runner.go:130] >       "pinned":  false
	I1217 20:29:40.728778  522827 command_runner.go:130] >     },
	I1217 20:29:40.728781  522827 command_runner.go:130] >     {
	I1217 20:29:40.728789  522827 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1217 20:29:40.728793  522827 command_runner.go:130] >       "repoTags":  [
	I1217 20:29:40.728798  522827 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1217 20:29:40.728801  522827 command_runner.go:130] >       ],
	I1217 20:29:40.728805  522827 command_runner.go:130] >       "repoDigests":  [
	I1217 20:29:40.728813  522827 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1217 20:29:40.728821  522827 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1217 20:29:40.728824  522827 command_runner.go:130] >       ],
	I1217 20:29:40.728829  522827 command_runner.go:130] >       "size":  "74491780",
	I1217 20:29:40.728833  522827 command_runner.go:130] >       "username":  "nonroot",
	I1217 20:29:40.728840  522827 command_runner.go:130] >       "pinned":  false
	I1217 20:29:40.728843  522827 command_runner.go:130] >     },
	I1217 20:29:40.728846  522827 command_runner.go:130] >     {
	I1217 20:29:40.728853  522827 command_runner.go:130] >       "id":  "271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57",
	I1217 20:29:40.728857  522827 command_runner.go:130] >       "repoTags":  [
	I1217 20:29:40.728862  522827 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.6-0"
	I1217 20:29:40.728866  522827 command_runner.go:130] >       ],
	I1217 20:29:40.728870  522827 command_runner.go:130] >       "repoDigests":  [
	I1217 20:29:40.728877  522827 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890",
	I1217 20:29:40.728887  522827 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:aa0d8bc8f6a6c3655b8efe0a10c5bf052f5574ebe13f904c5b0c9002ce4b2561"
	I1217 20:29:40.728890  522827 command_runner.go:130] >       ],
	I1217 20:29:40.728894  522827 command_runner.go:130] >       "size":  "60850387",
	I1217 20:29:40.728898  522827 command_runner.go:130] >       "uid":  {
	I1217 20:29:40.728902  522827 command_runner.go:130] >         "value":  "0"
	I1217 20:29:40.728904  522827 command_runner.go:130] >       },
	I1217 20:29:40.728913  522827 command_runner.go:130] >       "username":  "",
	I1217 20:29:40.728917  522827 command_runner.go:130] >       "pinned":  false
	I1217 20:29:40.728920  522827 command_runner.go:130] >     },
	I1217 20:29:40.728924  522827 command_runner.go:130] >     {
	I1217 20:29:40.728930  522827 command_runner.go:130] >       "id":  "3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54",
	I1217 20:29:40.728934  522827 command_runner.go:130] >       "repoTags":  [
	I1217 20:29:40.728939  522827 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-rc.1"
	I1217 20:29:40.728943  522827 command_runner.go:130] >       ],
	I1217 20:29:40.728946  522827 command_runner.go:130] >       "repoDigests":  [
	I1217 20:29:40.728954  522827 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:58367b5c0428495c0c12411fa7a018f5d40fe57307b85d8935b1ed35706ff7ee",
	I1217 20:29:40.728962  522827 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:e6ee3594f9ff061c53d6721bc04b810ec4227e28da3bd98e59206d552d45cde8"
	I1217 20:29:40.728965  522827 command_runner.go:130] >       ],
	I1217 20:29:40.728969  522827 command_runner.go:130] >       "size":  "85015535",
	I1217 20:29:40.728972  522827 command_runner.go:130] >       "uid":  {
	I1217 20:29:40.728976  522827 command_runner.go:130] >         "value":  "0"
	I1217 20:29:40.728979  522827 command_runner.go:130] >       },
	I1217 20:29:40.728983  522827 command_runner.go:130] >       "username":  "",
	I1217 20:29:40.728986  522827 command_runner.go:130] >       "pinned":  false
	I1217 20:29:40.728996  522827 command_runner.go:130] >     },
	I1217 20:29:40.728999  522827 command_runner.go:130] >     {
	I1217 20:29:40.729006  522827 command_runner.go:130] >       "id":  "a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a",
	I1217 20:29:40.729009  522827 command_runner.go:130] >       "repoTags":  [
	I1217 20:29:40.729015  522827 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1"
	I1217 20:29:40.729018  522827 command_runner.go:130] >       ],
	I1217 20:29:40.729022  522827 command_runner.go:130] >       "repoDigests":  [
	I1217 20:29:40.729031  522827 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:42360249c0c729ed0542bc8e4a6cd9ba4df358a4e5a140f6c24d5f966ee5121f",
	I1217 20:29:40.729039  522827 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:57ab0f75f58d99f4be7bff7bdda015fcbf1b7c20e58ba2722c8c39f751dc8c98"
	I1217 20:29:40.729042  522827 command_runner.go:130] >       ],
	I1217 20:29:40.729046  522827 command_runner.go:130] >       "size":  "72170325",
	I1217 20:29:40.729049  522827 command_runner.go:130] >       "uid":  {
	I1217 20:29:40.729053  522827 command_runner.go:130] >         "value":  "0"
	I1217 20:29:40.729056  522827 command_runner.go:130] >       },
	I1217 20:29:40.729060  522827 command_runner.go:130] >       "username":  "",
	I1217 20:29:40.729064  522827 command_runner.go:130] >       "pinned":  false
	I1217 20:29:40.729067  522827 command_runner.go:130] >     },
	I1217 20:29:40.729070  522827 command_runner.go:130] >     {
	I1217 20:29:40.729076  522827 command_runner.go:130] >       "id":  "7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e",
	I1217 20:29:40.729081  522827 command_runner.go:130] >       "repoTags":  [
	I1217 20:29:40.729086  522827 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-rc.1"
	I1217 20:29:40.729089  522827 command_runner.go:130] >       ],
	I1217 20:29:40.729093  522827 command_runner.go:130] >       "repoDigests":  [
	I1217 20:29:40.729100  522827 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:709cbcd809826ad98b553d8e283a04db70fa653526d1c2a5e1b50000701b2b6f",
	I1217 20:29:40.729108  522827 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bdd1fa8b53558a2e1967379a36b085c93faf15581e5fa9f212baf679d89c5bb5"
	I1217 20:29:40.729111  522827 command_runner.go:130] >       ],
	I1217 20:29:40.729115  522827 command_runner.go:130] >       "size":  "74107287",
	I1217 20:29:40.729119  522827 command_runner.go:130] >       "username":  "",
	I1217 20:29:40.729123  522827 command_runner.go:130] >       "pinned":  false
	I1217 20:29:40.729125  522827 command_runner.go:130] >     },
	I1217 20:29:40.729128  522827 command_runner.go:130] >     {
	I1217 20:29:40.729135  522827 command_runner.go:130] >       "id":  "abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde",
	I1217 20:29:40.729138  522827 command_runner.go:130] >       "repoTags":  [
	I1217 20:29:40.729147  522827 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-rc.1"
	I1217 20:29:40.729150  522827 command_runner.go:130] >       ],
	I1217 20:29:40.729154  522827 command_runner.go:130] >       "repoDigests":  [
	I1217 20:29:40.729163  522827 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8155e3db27c7081abfc8eb5da70820cfeaf0bba7449e45360e8220e670f417d3",
	I1217 20:29:40.729180  522827 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:9ac9664e74153a60bf2c27af77561abc33d85a716a48893c7e50ad356adc4ea0"
	I1217 20:29:40.729183  522827 command_runner.go:130] >       ],
	I1217 20:29:40.729187  522827 command_runner.go:130] >       "size":  "49822549",
	I1217 20:29:40.729191  522827 command_runner.go:130] >       "uid":  {
	I1217 20:29:40.729195  522827 command_runner.go:130] >         "value":  "0"
	I1217 20:29:40.729198  522827 command_runner.go:130] >       },
	I1217 20:29:40.729202  522827 command_runner.go:130] >       "username":  "",
	I1217 20:29:40.729205  522827 command_runner.go:130] >       "pinned":  false
	I1217 20:29:40.729208  522827 command_runner.go:130] >     },
	I1217 20:29:40.729212  522827 command_runner.go:130] >     {
	I1217 20:29:40.729218  522827 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1217 20:29:40.729221  522827 command_runner.go:130] >       "repoTags":  [
	I1217 20:29:40.729225  522827 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1217 20:29:40.729228  522827 command_runner.go:130] >       ],
	I1217 20:29:40.729232  522827 command_runner.go:130] >       "repoDigests":  [
	I1217 20:29:40.729239  522827 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1217 20:29:40.729246  522827 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1217 20:29:40.729249  522827 command_runner.go:130] >       ],
	I1217 20:29:40.729253  522827 command_runner.go:130] >       "size":  "519884",
	I1217 20:29:40.729256  522827 command_runner.go:130] >       "uid":  {
	I1217 20:29:40.729260  522827 command_runner.go:130] >         "value":  "65535"
	I1217 20:29:40.729263  522827 command_runner.go:130] >       },
	I1217 20:29:40.729267  522827 command_runner.go:130] >       "username":  "",
	I1217 20:29:40.729271  522827 command_runner.go:130] >       "pinned":  true
	I1217 20:29:40.729274  522827 command_runner.go:130] >     }
	I1217 20:29:40.729276  522827 command_runner.go:130] >   ]
	I1217 20:29:40.729279  522827 command_runner.go:130] > }
	I1217 20:29:40.730532  522827 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 20:29:40.730563  522827 cache_images.go:86] Images are preloaded, skipping loading
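[Editor's note] The JSON above is the CRI image list minikube reads back before deciding whether it still needs to load images. A minimal Go sketch, assuming the same field names as the log, of decoding such a payload to check which images are present and pinned (the types here are illustrative, not minikube's):

// Minimal sketch: decode a CRI `images -o json`-style payload like the one
// logged above. Field names follow the log; the sample input is abbreviated.
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

type criImage struct {
	ID       string   `json:"id"`
	RepoTags []string `json:"repoTags"`
	Size     string   `json:"size"` // CRI encodes sizes as decimal strings
	Pinned   bool     `json:"pinned"`
}

type listImagesResponse struct {
	Images []criImage `json:"images"`
}

func main() {
	payload := []byte(`{"images": [
	  {"id": "d7b100cd9a77", "repoTags": ["registry.k8s.io/pause:3.10.1"], "size": "519884", "pinned": true}
	]}`)

	var resp listImagesResponse
	if err := json.Unmarshal(payload, &resp); err != nil {
		log.Fatal(err)
	}
	for _, img := range resp.Images {
		fmt.Printf("%v pinned=%v size=%s\n", img.RepoTags, img.Pinned, img.Size)
	}
}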
	I1217 20:29:40.730572  522827 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 crio true true} ...
	I1217 20:29:40.730679  522827 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-655452 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-655452 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
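[Editor's note] The kubelet drop-in logged above first clears any inherited ExecStart (the empty `ExecStart=` line) before setting its own, which is the standard systemd override pattern. A hypothetical sketch of rendering such a drop-in with Go's text/template; the template text and field names are assumptions for illustration, not minikube's actual generator:

// Hypothetical sketch: render a kubelet systemd drop-in like the one above.
package main

import (
	"os"
	"text/template"
)

const unit = `[Unit]
Wants={{.Runtime}}.service

[Service]
ExecStart=
ExecStart={{.KubeletPath}} --hostname-override={{.NodeName}} --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	// Values below are taken from the log; the map keys are made up.
	t.Execute(os.Stdout, map[string]string{
		"Runtime":     "crio",
		"KubeletPath": "/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet",
		"NodeName":    "functional-655452",
		"NodeIP":      "192.168.49.2",
	})
}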
	I1217 20:29:40.730767  522827 ssh_runner.go:195] Run: crio config
	I1217 20:29:40.759067  522827 command_runner.go:130] ! time="2025-12-17T20:29:40.758680307Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1217 20:29:40.759091  522827 command_runner.go:130] ! time="2025-12-17T20:29:40.758877363Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1217 20:29:40.759355  522827 command_runner.go:130] ! time="2025-12-17T20:29:40.759160664Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1217 20:29:40.759513  522827 command_runner.go:130] ! time="2025-12-17T20:29:40.75929148Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1217 20:29:40.759764  522827 command_runner.go:130] ! time="2025-12-17T20:29:40.759610703Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:29:40.760178  522827 command_runner.go:130] ! time="2025-12-17T20:29:40.759978034Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1217 20:29:40.781892  522827 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
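[Editor's note] The stderr lines above show CRI-O's configuration merge order: the base file first, then each drop-in under /etc/crio/crio.conf.d in lexical order, so 10-crio.conf overrides 02-crio.conf on conflicting keys. A small Go sketch that reproduces that ordering (the directory path is the one from the log):

// Sketch: list drop-ins in the order CRI-O applies them (lexical).
package main

import (
	"fmt"
	"path/filepath"
	"sort"
)

func main() {
	files, _ := filepath.Glob("/etc/crio/crio.conf.d/*.conf")
	sort.Strings(files) // Glob already sorts; kept explicit for clarity
	for i, f := range files {
		fmt.Printf("%d: %s\n", i+1, f) // later files win on conflicting keys
	}
}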
	I1217 20:29:40.789853  522827 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1217 20:29:40.789886  522827 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1217 20:29:40.789894  522827 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1217 20:29:40.789897  522827 command_runner.go:130] > #
	I1217 20:29:40.789905  522827 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1217 20:29:40.789911  522827 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1217 20:29:40.789918  522827 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1217 20:29:40.789927  522827 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1217 20:29:40.789931  522827 command_runner.go:130] > # reload'.
	I1217 20:29:40.789938  522827 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1217 20:29:40.789949  522827 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1217 20:29:40.789959  522827 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1217 20:29:40.789965  522827 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1217 20:29:40.789972  522827 command_runner.go:130] > [crio]
	I1217 20:29:40.789978  522827 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1217 20:29:40.789983  522827 command_runner.go:130] > # containers images, in this directory.
	I1217 20:29:40.789993  522827 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1217 20:29:40.790003  522827 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1217 20:29:40.790008  522827 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1217 20:29:40.790017  522827 command_runner.go:130] > # Path to the "imagestore". If set, CRI-O stores all of its images in this directory, separately from the root directory.
	I1217 20:29:40.790024  522827 command_runner.go:130] > # imagestore = ""
	I1217 20:29:40.790038  522827 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1217 20:29:40.790048  522827 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1217 20:29:40.790053  522827 command_runner.go:130] > # storage_driver = "overlay"
	I1217 20:29:40.790058  522827 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1217 20:29:40.790065  522827 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1217 20:29:40.790069  522827 command_runner.go:130] > # storage_option = [
	I1217 20:29:40.790073  522827 command_runner.go:130] > # ]
	I1217 20:29:40.790079  522827 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1217 20:29:40.790092  522827 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1217 20:29:40.790100  522827 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1217 20:29:40.790106  522827 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1217 20:29:40.790112  522827 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1217 20:29:40.790119  522827 command_runner.go:130] > # always happen on a node reboot
	I1217 20:29:40.790124  522827 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1217 20:29:40.790139  522827 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1217 20:29:40.790152  522827 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1217 20:29:40.790158  522827 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1217 20:29:40.790162  522827 command_runner.go:130] > # version_file_persist = ""
	I1217 20:29:40.790170  522827 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1217 20:29:40.790180  522827 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1217 20:29:40.790184  522827 command_runner.go:130] > # internal_wipe = true
	I1217 20:29:40.790193  522827 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1217 20:29:40.790202  522827 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1217 20:29:40.790206  522827 command_runner.go:130] > # internal_repair = true
	I1217 20:29:40.790211  522827 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1217 20:29:40.790219  522827 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1217 20:29:40.790226  522827 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1217 20:29:40.790232  522827 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1217 20:29:40.790241  522827 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1217 20:29:40.790251  522827 command_runner.go:130] > [crio.api]
	I1217 20:29:40.790257  522827 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1217 20:29:40.790262  522827 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1217 20:29:40.790271  522827 command_runner.go:130] > # IP address on which the stream server will listen.
	I1217 20:29:40.790278  522827 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1217 20:29:40.790285  522827 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1217 20:29:40.790290  522827 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1217 20:29:40.790297  522827 command_runner.go:130] > # stream_port = "0"
	I1217 20:29:40.790302  522827 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1217 20:29:40.790307  522827 command_runner.go:130] > # stream_enable_tls = false
	I1217 20:29:40.790313  522827 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1217 20:29:40.790320  522827 command_runner.go:130] > # stream_idle_timeout = ""
	I1217 20:29:40.790330  522827 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1217 20:29:40.790339  522827 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1217 20:29:40.790343  522827 command_runner.go:130] > # stream_tls_cert = ""
	I1217 20:29:40.790349  522827 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1217 20:29:40.790357  522827 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1217 20:29:40.790361  522827 command_runner.go:130] > # stream_tls_key = ""
	I1217 20:29:40.790367  522827 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1217 20:29:40.790377  522827 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1217 20:29:40.790382  522827 command_runner.go:130] > # automatically pick up the changes.
	I1217 20:29:40.790385  522827 command_runner.go:130] > # stream_tls_ca = ""
	I1217 20:29:40.790402  522827 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1217 20:29:40.790415  522827 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1217 20:29:40.790423  522827 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1217 20:29:40.790428  522827 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
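[Editor's note] A quick arithmetic check of the gRPC defaults quoted above: 80 * 1024 * 1024 does equal the 83886080 bytes shown.

package main

import "fmt"

func main() {
	const defaultMsgSize = 80 * 1024 * 1024 // default send/recv cap
	fmt.Println(defaultMsgSize)             // 83886080
}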
	I1217 20:29:40.790437  522827 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1217 20:29:40.790443  522827 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1217 20:29:40.790447  522827 command_runner.go:130] > [crio.runtime]
	I1217 20:29:40.790455  522827 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1217 20:29:40.790465  522827 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1217 20:29:40.790470  522827 command_runner.go:130] > # "nofile=1024:2048"
	I1217 20:29:40.790476  522827 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1217 20:29:40.790480  522827 command_runner.go:130] > # default_ulimits = [
	I1217 20:29:40.790486  522827 command_runner.go:130] > # ]
	I1217 20:29:40.790493  522827 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1217 20:29:40.790499  522827 command_runner.go:130] > # no_pivot = false
	I1217 20:29:40.790505  522827 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1217 20:29:40.790511  522827 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1217 20:29:40.790518  522827 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1217 20:29:40.790525  522827 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1217 20:29:40.790530  522827 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1217 20:29:40.790539  522827 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1217 20:29:40.790543  522827 command_runner.go:130] > # conmon = ""
	I1217 20:29:40.790547  522827 command_runner.go:130] > # Cgroup setting for conmon
	I1217 20:29:40.790558  522827 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1217 20:29:40.790563  522827 command_runner.go:130] > conmon_cgroup = "pod"
	I1217 20:29:40.790572  522827 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1217 20:29:40.790585  522827 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1217 20:29:40.790592  522827 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1217 20:29:40.790603  522827 command_runner.go:130] > # conmon_env = [
	I1217 20:29:40.790606  522827 command_runner.go:130] > # ]
	I1217 20:29:40.790611  522827 command_runner.go:130] > # Additional environment variables to set for all the
	I1217 20:29:40.790621  522827 command_runner.go:130] > # containers. These are overridden if set in the
	I1217 20:29:40.790627  522827 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1217 20:29:40.790631  522827 command_runner.go:130] > # default_env = [
	I1217 20:29:40.790634  522827 command_runner.go:130] > # ]
	I1217 20:29:40.790639  522827 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1217 20:29:40.790647  522827 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I1217 20:29:40.790653  522827 command_runner.go:130] > # selinux = false
	I1217 20:29:40.790660  522827 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1217 20:29:40.790675  522827 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1217 20:29:40.790682  522827 command_runner.go:130] > # This option supports live configuration reload.
	I1217 20:29:40.790691  522827 command_runner.go:130] > # seccomp_profile = ""
	I1217 20:29:40.790698  522827 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1217 20:29:40.790703  522827 command_runner.go:130] > # This option supports live configuration reload.
	I1217 20:29:40.790707  522827 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1217 20:29:40.790717  522827 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1217 20:29:40.790723  522827 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1217 20:29:40.790730  522827 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1217 20:29:40.790738  522827 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1217 20:29:40.790744  522827 command_runner.go:130] > # This option supports live configuration reload.
	I1217 20:29:40.790751  522827 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1217 20:29:40.790757  522827 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1217 20:29:40.790761  522827 command_runner.go:130] > # the cgroup blockio controller.
	I1217 20:29:40.790765  522827 command_runner.go:130] > # blockio_config_file = ""
	I1217 20:29:40.790774  522827 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1217 20:29:40.790780  522827 command_runner.go:130] > # blockio parameters.
	I1217 20:29:40.790790  522827 command_runner.go:130] > # blockio_reload = false
	I1217 20:29:40.790796  522827 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1217 20:29:40.790800  522827 command_runner.go:130] > # irqbalance daemon.
	I1217 20:29:40.790805  522827 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1217 20:29:40.790814  522827 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1217 20:29:40.790828  522827 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1217 20:29:40.790836  522827 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1217 20:29:40.790845  522827 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1217 20:29:40.790852  522827 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1217 20:29:40.790859  522827 command_runner.go:130] > # This option supports live configuration reload.
	I1217 20:29:40.790863  522827 command_runner.go:130] > # rdt_config_file = ""
	I1217 20:29:40.790869  522827 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1217 20:29:40.790873  522827 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1217 20:29:40.790881  522827 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1217 20:29:40.790885  522827 command_runner.go:130] > # separate_pull_cgroup = ""
	I1217 20:29:40.790892  522827 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1217 20:29:40.790900  522827 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1217 20:29:40.790904  522827 command_runner.go:130] > # will be added.
	I1217 20:29:40.790908  522827 command_runner.go:130] > # default_capabilities = [
	I1217 20:29:40.790920  522827 command_runner.go:130] > # 	"CHOWN",
	I1217 20:29:40.790924  522827 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1217 20:29:40.790927  522827 command_runner.go:130] > # 	"FSETID",
	I1217 20:29:40.790930  522827 command_runner.go:130] > # 	"FOWNER",
	I1217 20:29:40.790940  522827 command_runner.go:130] > # 	"SETGID",
	I1217 20:29:40.790944  522827 command_runner.go:130] > # 	"SETUID",
	I1217 20:29:40.790963  522827 command_runner.go:130] > # 	"SETPCAP",
	I1217 20:29:40.790971  522827 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1217 20:29:40.790975  522827 command_runner.go:130] > # 	"KILL",
	I1217 20:29:40.790977  522827 command_runner.go:130] > # ]
	I1217 20:29:40.790985  522827 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1217 20:29:40.790992  522827 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1217 20:29:40.790999  522827 command_runner.go:130] > # add_inheritable_capabilities = false
	I1217 20:29:40.791005  522827 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1217 20:29:40.791018  522827 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1217 20:29:40.791023  522827 command_runner.go:130] > default_sysctls = [
	I1217 20:29:40.791030  522827 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1217 20:29:40.791033  522827 command_runner.go:130] > ]
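[Editor's note] The one sysctl set here, net.ipv4.ip_unprivileged_port_start=0, lets unprivileged container processes bind ports below 1024. A sketch of what that enables; run outside a container carrying this sysctl, the bind normally fails for non-root users:

// Sketch: bind a privileged port as a non-root process.
package main

import (
	"fmt"
	"log"
	"net"
)

func main() {
	ln, err := net.Listen("tcp", ":80") // port < 1024
	if err != nil {
		log.Fatalf("bind failed (expected without the sysctl): %v", err)
	}
	defer ln.Close()
	fmt.Println("bound", ln.Addr())
}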
	I1217 20:29:40.791038  522827 command_runner.go:130] > # List of devices on the host that a
	I1217 20:29:40.791044  522827 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1217 20:29:40.791048  522827 command_runner.go:130] > # allowed_devices = [
	I1217 20:29:40.791055  522827 command_runner.go:130] > # 	"/dev/fuse",
	I1217 20:29:40.791059  522827 command_runner.go:130] > # 	"/dev/net/tun",
	I1217 20:29:40.791062  522827 command_runner.go:130] > # ]
	I1217 20:29:40.791067  522827 command_runner.go:130] > # List of additional devices, specified as
	I1217 20:29:40.791081  522827 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1217 20:29:40.791088  522827 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1217 20:29:40.791096  522827 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1217 20:29:40.791103  522827 command_runner.go:130] > # additional_devices = [
	I1217 20:29:40.791110  522827 command_runner.go:130] > # ]
	I1217 20:29:40.791115  522827 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1217 20:29:40.791119  522827 command_runner.go:130] > # cdi_spec_dirs = [
	I1217 20:29:40.791122  522827 command_runner.go:130] > # 	"/etc/cdi",
	I1217 20:29:40.791126  522827 command_runner.go:130] > # 	"/var/run/cdi",
	I1217 20:29:40.791130  522827 command_runner.go:130] > # ]
	I1217 20:29:40.791136  522827 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1217 20:29:40.791144  522827 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1217 20:29:40.791149  522827 command_runner.go:130] > # Defaults to false.
	I1217 20:29:40.791156  522827 command_runner.go:130] > # device_ownership_from_security_context = false
	I1217 20:29:40.791164  522827 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1217 20:29:40.791178  522827 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1217 20:29:40.791181  522827 command_runner.go:130] > # hooks_dir = [
	I1217 20:29:40.791186  522827 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1217 20:29:40.791189  522827 command_runner.go:130] > # ]
	I1217 20:29:40.791195  522827 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1217 20:29:40.791205  522827 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1217 20:29:40.791210  522827 command_runner.go:130] > # its default mounts from the following two files:
	I1217 20:29:40.791220  522827 command_runner.go:130] > #
	I1217 20:29:40.791229  522827 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1217 20:29:40.791240  522827 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1217 20:29:40.791248  522827 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1217 20:29:40.791251  522827 command_runner.go:130] > #
	I1217 20:29:40.791257  522827 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1217 20:29:40.791274  522827 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1217 20:29:40.791280  522827 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1217 20:29:40.791285  522827 command_runner.go:130] > #      only add mounts it finds in this file.
	I1217 20:29:40.791288  522827 command_runner.go:130] > #
	I1217 20:29:40.791292  522827 command_runner.go:130] > # default_mounts_file = ""
	I1217 20:29:40.791301  522827 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1217 20:29:40.791316  522827 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1217 20:29:40.791320  522827 command_runner.go:130] > # pids_limit = -1
	I1217 20:29:40.791326  522827 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1217 20:29:40.791335  522827 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1217 20:29:40.791343  522827 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1217 20:29:40.791354  522827 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1217 20:29:40.791357  522827 command_runner.go:130] > # log_size_max = -1
	I1217 20:29:40.791364  522827 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1217 20:29:40.791368  522827 command_runner.go:130] > # log_to_journald = false
	I1217 20:29:40.791374  522827 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1217 20:29:40.791383  522827 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1217 20:29:40.791391  522827 command_runner.go:130] > # Path to directory for container attach sockets.
	I1217 20:29:40.791396  522827 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1217 20:29:40.791401  522827 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1217 20:29:40.791405  522827 command_runner.go:130] > # bind_mount_prefix = ""
	I1217 20:29:40.791417  522827 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1217 20:29:40.791421  522827 command_runner.go:130] > # read_only = false
	I1217 20:29:40.791427  522827 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1217 20:29:40.791437  522827 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1217 20:29:40.791441  522827 command_runner.go:130] > # live configuration reload.
	I1217 20:29:40.791445  522827 command_runner.go:130] > # log_level = "info"
	I1217 20:29:40.791454  522827 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1217 20:29:40.791460  522827 command_runner.go:130] > # This option supports live configuration reload.
	I1217 20:29:40.791466  522827 command_runner.go:130] > # log_filter = ""
	I1217 20:29:40.791472  522827 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1217 20:29:40.791481  522827 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1217 20:29:40.791485  522827 command_runner.go:130] > # separated by comma.
	I1217 20:29:40.791493  522827 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1217 20:29:40.791497  522827 command_runner.go:130] > # uid_mappings = ""
	I1217 20:29:40.791506  522827 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1217 20:29:40.791518  522827 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1217 20:29:40.791523  522827 command_runner.go:130] > # separated by comma.
	I1217 20:29:40.791530  522827 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1217 20:29:40.791535  522827 command_runner.go:130] > # gid_mappings = ""
	I1217 20:29:40.791540  522827 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1217 20:29:40.791549  522827 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1217 20:29:40.791556  522827 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1217 20:29:40.791565  522827 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1217 20:29:40.791572  522827 command_runner.go:130] > # minimum_mappable_uid = -1
	I1217 20:29:40.791604  522827 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1217 20:29:40.791611  522827 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1217 20:29:40.791617  522827 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1217 20:29:40.791627  522827 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1217 20:29:40.791634  522827 command_runner.go:130] > # minimum_mappable_gid = -1
	I1217 20:29:40.791640  522827 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1217 20:29:40.791648  522827 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1217 20:29:40.791662  522827 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1217 20:29:40.791666  522827 command_runner.go:130] > # ctr_stop_timeout = 30
	I1217 20:29:40.791672  522827 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1217 20:29:40.791680  522827 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1217 20:29:40.791685  522827 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1217 20:29:40.791690  522827 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1217 20:29:40.791694  522827 command_runner.go:130] > # drop_infra_ctr = true
	I1217 20:29:40.791700  522827 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1217 20:29:40.791712  522827 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1217 20:29:40.791723  522827 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1217 20:29:40.791727  522827 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1217 20:29:40.791734  522827 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1217 20:29:40.791743  522827 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1217 20:29:40.791749  522827 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1217 20:29:40.791756  522827 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1217 20:29:40.791760  522827 command_runner.go:130] > # shared_cpuset = ""
	I1217 20:29:40.791766  522827 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1217 20:29:40.791773  522827 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1217 20:29:40.791777  522827 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1217 20:29:40.791784  522827 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1217 20:29:40.791795  522827 command_runner.go:130] > # pinns_path = ""
	I1217 20:29:40.791801  522827 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1217 20:29:40.791807  522827 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1217 20:29:40.791814  522827 command_runner.go:130] > # enable_criu_support = true
	I1217 20:29:40.791819  522827 command_runner.go:130] > # Enable/disable the generation of the container,
	I1217 20:29:40.791826  522827 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1217 20:29:40.791833  522827 command_runner.go:130] > # enable_pod_events = false
	I1217 20:29:40.791839  522827 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1217 20:29:40.791845  522827 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1217 20:29:40.791849  522827 command_runner.go:130] > # default_runtime = "crun"
	I1217 20:29:40.791857  522827 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1217 20:29:40.791865  522827 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I1217 20:29:40.791874  522827 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1217 20:29:40.791887  522827 command_runner.go:130] > # creation as a file is not desired either.
	I1217 20:29:40.791896  522827 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1217 20:29:40.791903  522827 command_runner.go:130] > # the hostname is being managed dynamically.
	I1217 20:29:40.791910  522827 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1217 20:29:40.791914  522827 command_runner.go:130] > # ]
	I1217 20:29:40.791920  522827 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1217 20:29:40.791929  522827 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1217 20:29:40.791935  522827 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1217 20:29:40.791943  522827 command_runner.go:130] > # Each entry in the table should follow the format:
	I1217 20:29:40.791946  522827 command_runner.go:130] > #
	I1217 20:29:40.791951  522827 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1217 20:29:40.791958  522827 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1217 20:29:40.791964  522827 command_runner.go:130] > # runtime_type = "oci"
	I1217 20:29:40.791969  522827 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1217 20:29:40.791976  522827 command_runner.go:130] > # inherit_default_runtime = false
	I1217 20:29:40.791981  522827 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1217 20:29:40.791986  522827 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1217 20:29:40.791990  522827 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1217 20:29:40.791996  522827 command_runner.go:130] > # monitor_env = []
	I1217 20:29:40.792001  522827 command_runner.go:130] > # privileged_without_host_devices = false
	I1217 20:29:40.792008  522827 command_runner.go:130] > # allowed_annotations = []
	I1217 20:29:40.792014  522827 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1217 20:29:40.792017  522827 command_runner.go:130] > # no_sync_log = false
	I1217 20:29:40.792021  522827 command_runner.go:130] > # default_annotations = {}
	I1217 20:29:40.792028  522827 command_runner.go:130] > # stream_websockets = false
	I1217 20:29:40.792034  522827 command_runner.go:130] > # seccomp_profile = ""
	I1217 20:29:40.792066  522827 command_runner.go:130] > # Where:
	I1217 20:29:40.792076  522827 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1217 20:29:40.792083  522827 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1217 20:29:40.792090  522827 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1217 20:29:40.792098  522827 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1217 20:29:40.792102  522827 command_runner.go:130] > #   in $PATH.
	I1217 20:29:40.792108  522827 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1217 20:29:40.792113  522827 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1217 20:29:40.792122  522827 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1217 20:29:40.792128  522827 command_runner.go:130] > #   state.
	I1217 20:29:40.792134  522827 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1217 20:29:40.792143  522827 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1217 20:29:40.792149  522827 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1217 20:29:40.792155  522827 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1217 20:29:40.792163  522827 command_runner.go:130] > #   the values from the default runtime on load time.
	I1217 20:29:40.792174  522827 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1217 20:29:40.792183  522827 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1217 20:29:40.792190  522827 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1217 20:29:40.792199  522827 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1217 20:29:40.792207  522827 command_runner.go:130] > #   The currently recognized values are:
	I1217 20:29:40.792214  522827 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1217 20:29:40.792222  522827 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1217 20:29:40.792231  522827 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1217 20:29:40.792237  522827 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1217 20:29:40.792251  522827 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1217 20:29:40.792260  522827 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1217 20:29:40.792270  522827 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1217 20:29:40.792277  522827 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1217 20:29:40.792284  522827 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1217 20:29:40.792293  522827 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1217 20:29:40.792309  522827 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1217 20:29:40.792316  522827 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1217 20:29:40.792322  522827 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1217 20:29:40.792331  522827 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1217 20:29:40.792337  522827 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1217 20:29:40.792345  522827 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1217 20:29:40.792353  522827 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1217 20:29:40.792358  522827 command_runner.go:130] > #   deprecated option "conmon".
	I1217 20:29:40.792367  522827 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1217 20:29:40.792380  522827 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1217 20:29:40.792387  522827 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1217 20:29:40.792392  522827 command_runner.go:130] > #   should be moved to the container's cgroup
	I1217 20:29:40.792405  522827 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1217 20:29:40.792410  522827 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1217 20:29:40.792420  522827 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1217 20:29:40.792424  522827 command_runner.go:130] > #   conmon-rs by using:
	I1217 20:29:40.792432  522827 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1217 20:29:40.792441  522827 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1217 20:29:40.792454  522827 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1217 20:29:40.792465  522827 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1217 20:29:40.792471  522827 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1217 20:29:40.792485  522827 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1217 20:29:40.792497  522827 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1217 20:29:40.792506  522827 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1217 20:29:40.792515  522827 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1217 20:29:40.792524  522827 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1217 20:29:40.792529  522827 command_runner.go:130] > #   when a machine crash happens.
	I1217 20:29:40.792536  522827 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1217 20:29:40.792546  522827 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1217 20:29:40.792558  522827 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1217 20:29:40.792562  522827 command_runner.go:130] > #   seccomp profile for the runtime.
	I1217 20:29:40.792568  522827 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1217 20:29:40.792579  522827 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1217 20:29:40.792582  522827 command_runner.go:130] > #
	I1217 20:29:40.792587  522827 command_runner.go:130] > # Using the seccomp notifier feature:
	I1217 20:29:40.792590  522827 command_runner.go:130] > #
	I1217 20:29:40.792596  522827 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1217 20:29:40.792605  522827 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1217 20:29:40.792608  522827 command_runner.go:130] > #
	I1217 20:29:40.792615  522827 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1217 20:29:40.792630  522827 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1217 20:29:40.792633  522827 command_runner.go:130] > #
	I1217 20:29:40.792642  522827 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1217 20:29:40.792649  522827 command_runner.go:130] > # feature.
	I1217 20:29:40.792652  522827 command_runner.go:130] > #
	I1217 20:29:40.792658  522827 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1217 20:29:40.792667  522827 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1217 20:29:40.792673  522827 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1217 20:29:40.792679  522827 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1217 20:29:40.792688  522827 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1217 20:29:40.792692  522827 command_runner.go:130] > #
	I1217 20:29:40.792702  522827 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1217 20:29:40.792711  522827 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1217 20:29:40.792715  522827 command_runner.go:130] > #
	I1217 20:29:40.792721  522827 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1217 20:29:40.792727  522827 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1217 20:29:40.792732  522827 command_runner.go:130] > #
	I1217 20:29:40.792738  522827 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1217 20:29:40.792744  522827 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1217 20:29:40.792750  522827 command_runner.go:130] > # limitation.
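[Editor's note] Putting the notifier description above together: the feature keys on a single pod-sandbox annotation. A minimal sketch of that metadata (the pod name is made up):

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	meta := map[string]any{
		"name": "debug-pod", // hypothetical pod
		"annotations": map[string]string{
			// "stop" terminates the workload ~5s after a blocked syscall is seen.
			"io.kubernetes.cri-o.seccompNotifierAction": "stop",
		},
	}
	out, _ := json.MarshalIndent(meta, "", "  ")
	fmt.Println(string(out))
}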
	I1217 20:29:40.792754  522827 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1217 20:29:40.792758  522827 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1217 20:29:40.792761  522827 command_runner.go:130] > runtime_type = ""
	I1217 20:29:40.792765  522827 command_runner.go:130] > runtime_root = "/run/crun"
	I1217 20:29:40.792769  522827 command_runner.go:130] > inherit_default_runtime = false
	I1217 20:29:40.792774  522827 command_runner.go:130] > runtime_config_path = ""
	I1217 20:29:40.792781  522827 command_runner.go:130] > container_min_memory = ""
	I1217 20:29:40.792785  522827 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1217 20:29:40.792796  522827 command_runner.go:130] > monitor_cgroup = "pod"
	I1217 20:29:40.792801  522827 command_runner.go:130] > monitor_exec_cgroup = ""
	I1217 20:29:40.792804  522827 command_runner.go:130] > allowed_annotations = [
	I1217 20:29:40.792809  522827 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1217 20:29:40.792814  522827 command_runner.go:130] > ]
	I1217 20:29:40.792819  522827 command_runner.go:130] > privileged_without_host_devices = false
	I1217 20:29:40.792823  522827 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1217 20:29:40.792828  522827 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1217 20:29:40.792834  522827 command_runner.go:130] > runtime_type = ""
	I1217 20:29:40.792839  522827 command_runner.go:130] > runtime_root = "/run/runc"
	I1217 20:29:40.792842  522827 command_runner.go:130] > inherit_default_runtime = false
	I1217 20:29:40.792846  522827 command_runner.go:130] > runtime_config_path = ""
	I1217 20:29:40.792850  522827 command_runner.go:130] > container_min_memory = ""
	I1217 20:29:40.792856  522827 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1217 20:29:40.792860  522827 command_runner.go:130] > monitor_cgroup = "pod"
	I1217 20:29:40.792864  522827 command_runner.go:130] > monitor_exec_cgroup = ""
	I1217 20:29:40.792875  522827 command_runner.go:130] > privileged_without_host_devices = false
	I1217 20:29:40.792884  522827 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1217 20:29:40.792890  522827 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1217 20:29:40.792896  522827 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1217 20:29:40.792907  522827 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1217 20:29:40.792918  522827 command_runner.go:130] > # The currently supported resources are "cpuperiod" "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1217 20:29:40.792930  522827 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores, this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1217 20:29:40.792940  522827 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1217 20:29:40.792947  522827 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1217 20:29:40.792958  522827 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1217 20:29:40.792975  522827 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1217 20:29:40.792980  522827 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1217 20:29:40.792998  522827 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1217 20:29:40.793004  522827 command_runner.go:130] > # Example:
	I1217 20:29:40.793009  522827 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1217 20:29:40.793014  522827 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1217 20:29:40.793019  522827 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1217 20:29:40.793025  522827 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1217 20:29:40.793029  522827 command_runner.go:130] > # cpuset = "0-1"
	I1217 20:29:40.793033  522827 command_runner.go:130] > # cpushares = "5"
	I1217 20:29:40.793039  522827 command_runner.go:130] > # cpuquota = "1000"
	I1217 20:29:40.793043  522827 command_runner.go:130] > # cpuperiod = "100000"
	I1217 20:29:40.793050  522827 command_runner.go:130] > # cpulimit = "35"
	I1217 20:29:40.793059  522827 command_runner.go:130] > # Where:
	I1217 20:29:40.793066  522827 command_runner.go:130] > # The workload name is workload-type.
	I1217 20:29:40.793073  522827 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1217 20:29:40.793079  522827 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1217 20:29:40.793087  522827 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1217 20:29:40.793096  522827 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1217 20:29:40.793101  522827 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
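[Editor's note] Following the $annotation_prefix.$resource/$ctrName format described above, a per-container cpushares override for this example workload would look like the following sketch (the container name is hypothetical):

package main

import "fmt"

func main() {
	const prefix = "io.crio.workload-type" // annotation_prefix from the example
	ctr := "my-ctr"                        // hypothetical container name
	key := fmt.Sprintf("%s.%s/%s", prefix, "cpushares", ctr)
	fmt.Printf("%s = %q\n", key, "5")
}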
	I1217 20:29:40.793106  522827 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1217 20:29:40.793116  522827 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1217 20:29:40.793122  522827 command_runner.go:130] > # Default value is set to true
	I1217 20:29:40.793132  522827 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1217 20:29:40.793141  522827 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1217 20:29:40.793146  522827 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1217 20:29:40.793150  522827 command_runner.go:130] > # Default value is set to 'false'
	I1217 20:29:40.793155  522827 command_runner.go:130] > # disable_hostport_mapping = false
	I1217 20:29:40.793163  522827 command_runner.go:130] > # timezone sets the timezone for a container in CRI-O.
	I1217 20:29:40.793172  522827 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1217 20:29:40.793175  522827 command_runner.go:130] > # timezone = ""
	I1217 20:29:40.793185  522827 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1217 20:29:40.793188  522827 command_runner.go:130] > #
	I1217 20:29:40.793194  522827 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1217 20:29:40.793212  522827 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1217 20:29:40.793215  522827 command_runner.go:130] > [crio.image]
	I1217 20:29:40.793222  522827 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1217 20:29:40.793229  522827 command_runner.go:130] > # default_transport = "docker://"
	I1217 20:29:40.793236  522827 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1217 20:29:40.793243  522827 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1217 20:29:40.793249  522827 command_runner.go:130] > # global_auth_file = ""
	I1217 20:29:40.793255  522827 command_runner.go:130] > # The image used to instantiate infra containers.
	I1217 20:29:40.793260  522827 command_runner.go:130] > # This option supports live configuration reload.
	I1217 20:29:40.793264  522827 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1217 20:29:40.793271  522827 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1217 20:29:40.793277  522827 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1217 20:29:40.793283  522827 command_runner.go:130] > # This option supports live configuration reload.
	I1217 20:29:40.793289  522827 command_runner.go:130] > # pause_image_auth_file = ""
	I1217 20:29:40.793295  522827 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1217 20:29:40.793304  522827 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1217 20:29:40.793311  522827 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1217 20:29:40.793317  522827 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1217 20:29:40.793323  522827 command_runner.go:130] > # pause_command = "/pause"
	I1217 20:29:40.793329  522827 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1217 20:29:40.793335  522827 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1217 20:29:40.793342  522827 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1217 20:29:40.793351  522827 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1217 20:29:40.793357  522827 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1217 20:29:40.793372  522827 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1217 20:29:40.793376  522827 command_runner.go:130] > # pinned_images = [
	I1217 20:29:40.793379  522827 command_runner.go:130] > # ]
	I1217 20:29:40.793388  522827 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1217 20:29:40.793401  522827 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1217 20:29:40.793408  522827 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1217 20:29:40.793416  522827 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1217 20:29:40.793422  522827 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1217 20:29:40.793426  522827 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1217 20:29:40.793432  522827 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1217 20:29:40.793439  522827 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1217 20:29:40.793445  522827 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1217 20:29:40.793456  522827 command_runner.go:130] > # or the concatenated path is nonexistent, then the signature_policy or system
	I1217 20:29:40.793462  522827 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1217 20:29:40.793467  522827 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1217 20:29:40.793473  522827 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1217 20:29:40.793479  522827 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1217 20:29:40.793483  522827 command_runner.go:130] > # changing them here.
	I1217 20:29:40.793488  522827 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1217 20:29:40.793492  522827 command_runner.go:130] > # insecure_registries = [
	I1217 20:29:40.793495  522827 command_runner.go:130] > # ]
	I1217 20:29:40.793514  522827 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1217 20:29:40.793522  522827 command_runner.go:130] > # ignore; the last of these ignores volumes entirely.
	I1217 20:29:40.793526  522827 command_runner.go:130] > # image_volumes = "mkdir"
	I1217 20:29:40.793532  522827 command_runner.go:130] > # Temporary directory to use for storing big files
	I1217 20:29:40.793538  522827 command_runner.go:130] > # big_files_temporary_dir = ""
	I1217 20:29:40.793544  522827 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1217 20:29:40.793554  522827 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1217 20:29:40.793558  522827 command_runner.go:130] > # auto_reload_registries = false
	I1217 20:29:40.793564  522827 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1217 20:29:40.793572  522827 command_runner.go:130] > # gets canceled. This value is also used to calculate the pull progress interval as pull_progress_timeout / 10.
	I1217 20:29:40.793584  522827 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1217 20:29:40.793589  522827 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1217 20:29:40.793594  522827 command_runner.go:130] > # The mode of short name resolution.
	I1217 20:29:40.793600  522827 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1217 20:29:40.793607  522827 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1217 20:29:40.793613  522827 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1217 20:29:40.793624  522827 command_runner.go:130] > # short_name_mode = "enforcing"
	I1217 20:29:40.793631  522827 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1217 20:29:40.793636  522827 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1217 20:29:40.793643  522827 command_runner.go:130] > # oci_artifact_mount_support = true
	I1217 20:29:40.793649  522827 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1217 20:29:40.793653  522827 command_runner.go:130] > # CNI plugins.
	I1217 20:29:40.793662  522827 command_runner.go:130] > [crio.network]
	I1217 20:29:40.793669  522827 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1217 20:29:40.793674  522827 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1217 20:29:40.793678  522827 command_runner.go:130] > # cni_default_network = ""
	I1217 20:29:40.793683  522827 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1217 20:29:40.793688  522827 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1217 20:29:40.793695  522827 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1217 20:29:40.793701  522827 command_runner.go:130] > # plugin_dirs = [
	I1217 20:29:40.793705  522827 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1217 20:29:40.793708  522827 command_runner.go:130] > # ]
	I1217 20:29:40.793712  522827 command_runner.go:130] > # List of included pod metrics.
	I1217 20:29:40.793716  522827 command_runner.go:130] > # included_pod_metrics = [
	I1217 20:29:40.793721  522827 command_runner.go:130] > # ]
	I1217 20:29:40.793727  522827 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1217 20:29:40.793733  522827 command_runner.go:130] > [crio.metrics]
	I1217 20:29:40.793738  522827 command_runner.go:130] > # Globally enable or disable metrics support.
	I1217 20:29:40.793742  522827 command_runner.go:130] > # enable_metrics = false
	I1217 20:29:40.793749  522827 command_runner.go:130] > # Specify enabled metrics collectors.
	I1217 20:29:40.793754  522827 command_runner.go:130] > # Per default all metrics are enabled.
	I1217 20:29:40.793760  522827 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1217 20:29:40.793769  522827 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1217 20:29:40.793781  522827 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1217 20:29:40.793788  522827 command_runner.go:130] > # metrics_collectors = [
	I1217 20:29:40.793792  522827 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1217 20:29:40.793796  522827 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1217 20:29:40.793801  522827 command_runner.go:130] > # 	"containers_oom_total",
	I1217 20:29:40.793810  522827 command_runner.go:130] > # 	"processes_defunct",
	I1217 20:29:40.793814  522827 command_runner.go:130] > # 	"operations_total",
	I1217 20:29:40.793818  522827 command_runner.go:130] > # 	"operations_latency_seconds",
	I1217 20:29:40.793825  522827 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1217 20:29:40.793830  522827 command_runner.go:130] > # 	"operations_errors_total",
	I1217 20:29:40.793834  522827 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1217 20:29:40.793838  522827 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1217 20:29:40.793843  522827 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1217 20:29:40.793847  522827 command_runner.go:130] > # 	"image_pulls_success_total",
	I1217 20:29:40.793851  522827 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1217 20:29:40.793857  522827 command_runner.go:130] > # 	"containers_oom_count_total",
	I1217 20:29:40.793862  522827 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1217 20:29:40.793869  522827 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1217 20:29:40.793873  522827 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1217 20:29:40.793876  522827 command_runner.go:130] > # ]
	I1217 20:29:40.793882  522827 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1217 20:29:40.793888  522827 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1217 20:29:40.793894  522827 command_runner.go:130] > # The port on which the metrics server will listen.
	I1217 20:29:40.793898  522827 command_runner.go:130] > # metrics_port = 9090
	I1217 20:29:40.793905  522827 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1217 20:29:40.793909  522827 command_runner.go:130] > # metrics_socket = ""
	I1217 20:29:40.793920  522827 command_runner.go:130] > # The certificate for the secure metrics server.
	I1217 20:29:40.793926  522827 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1217 20:29:40.793932  522827 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1217 20:29:40.793939  522827 command_runner.go:130] > # certificate on any modification event.
	I1217 20:29:40.793942  522827 command_runner.go:130] > # metrics_cert = ""
	I1217 20:29:40.793947  522827 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1217 20:29:40.793959  522827 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1217 20:29:40.793967  522827 command_runner.go:130] > # metrics_key = ""
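	Taken together, [crio.metrics] gates a plain Prometheus text endpoint on metrics_host:metrics_port. A minimal sketch of scraping it, assuming enable_metrics has been flipped to true (it defaults to false) and the default 127.0.0.1:9090 is kept:

	package main

	import (
		"fmt"
		"io"
		"net/http"
		"os"
	)

	func main() {
		// Assumes enable_metrics = true in [crio.metrics]; the endpoint is off by default.
		resp, err := http.Get("http://127.0.0.1:9090/metrics")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		// Prometheus text format: HELP/TYPE lines followed by samples.
		fmt.Printf("%s", body)
	}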
	I1217 20:29:40.793980  522827 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1217 20:29:40.793983  522827 command_runner.go:130] > [crio.tracing]
	I1217 20:29:40.793989  522827 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1217 20:29:40.793996  522827 command_runner.go:130] > # enable_tracing = false
	I1217 20:29:40.794002  522827 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1217 20:29:40.794006  522827 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1217 20:29:40.794015  522827 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1217 20:29:40.794020  522827 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1217 20:29:40.794024  522827 command_runner.go:130] > # CRI-O NRI configuration.
	I1217 20:29:40.794027  522827 command_runner.go:130] > [crio.nri]
	I1217 20:29:40.794031  522827 command_runner.go:130] > # Globally enable or disable NRI.
	I1217 20:29:40.794035  522827 command_runner.go:130] > # enable_nri = true
	I1217 20:29:40.794039  522827 command_runner.go:130] > # NRI socket to listen on.
	I1217 20:29:40.794045  522827 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1217 20:29:40.794050  522827 command_runner.go:130] > # NRI plugin directory to use.
	I1217 20:29:40.794061  522827 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1217 20:29:40.794066  522827 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1217 20:29:40.794073  522827 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1217 20:29:40.794082  522827 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1217 20:29:40.794150  522827 command_runner.go:130] > # nri_disable_connections = false
	I1217 20:29:40.794172  522827 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1217 20:29:40.794178  522827 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1217 20:29:40.794186  522827 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1217 20:29:40.794191  522827 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1217 20:29:40.794200  522827 command_runner.go:130] > # NRI default validator configuration.
	I1217 20:29:40.794211  522827 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1217 20:29:40.794218  522827 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1217 20:29:40.794225  522827 command_runner.go:130] > # can be restricted/rejected:
	I1217 20:29:40.794229  522827 command_runner.go:130] > # - OCI hook injection
	I1217 20:29:40.794235  522827 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1217 20:29:40.794240  522827 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1217 20:29:40.794245  522827 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1217 20:29:40.794252  522827 command_runner.go:130] > # - adjustment of linux namespaces
	I1217 20:29:40.794263  522827 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1217 20:29:40.794277  522827 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1217 20:29:40.794284  522827 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1217 20:29:40.794295  522827 command_runner.go:130] > #
	I1217 20:29:40.794299  522827 command_runner.go:130] > # [crio.nri.default_validator]
	I1217 20:29:40.794304  522827 command_runner.go:130] > # nri_enable_default_validator = false
	I1217 20:29:40.794312  522827 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1217 20:29:40.794318  522827 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1217 20:29:40.794326  522827 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1217 20:29:40.794338  522827 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1217 20:29:40.794343  522827 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1217 20:29:40.794347  522827 command_runner.go:130] > # nri_validator_required_plugins = [
	I1217 20:29:40.794352  522827 command_runner.go:130] > # ]
	I1217 20:29:40.794359  522827 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
	I1217 20:29:40.794368  522827 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1217 20:29:40.794373  522827 command_runner.go:130] > [crio.stats]
	I1217 20:29:40.794386  522827 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1217 20:29:40.794392  522827 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1217 20:29:40.794398  522827 command_runner.go:130] > # stats_collection_period = 0
	I1217 20:29:40.794405  522827 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1217 20:29:40.794411  522827 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1217 20:29:40.794417  522827 command_runner.go:130] > # collection_period = 0
	I1217 20:29:40.794552  522827 cni.go:84] Creating CNI manager for ""
	I1217 20:29:40.794571  522827 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 20:29:40.794583  522827 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 20:29:40.794609  522827 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-655452 NodeName:functional-655452 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 20:29:40.794745  522827 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-655452"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 20:29:40.794827  522827 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1217 20:29:40.802768  522827 command_runner.go:130] > kubeadm
	I1217 20:29:40.802789  522827 command_runner.go:130] > kubectl
	I1217 20:29:40.802794  522827 command_runner.go:130] > kubelet
	I1217 20:29:40.802809  522827 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 20:29:40.802895  522827 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 20:29:40.810641  522827 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1217 20:29:40.826893  522827 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1217 20:29:40.841576  522827 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
	I1217 20:29:40.856014  522827 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1217 20:29:40.859640  522827 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1217 20:29:40.860204  522827 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:29:40.970449  522827 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 20:29:41.821239  522827 certs.go:69] Setting up /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452 for IP: 192.168.49.2
	I1217 20:29:41.821266  522827 certs.go:195] generating shared ca certs ...
	I1217 20:29:41.821284  522827 certs.go:227] acquiring lock for ca certs: {Name:mk2b1bd9fa0b029b02dd38a7b1b08717d4426eca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:29:41.821441  522827 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.key
	I1217 20:29:41.821492  522827 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.key
	I1217 20:29:41.821509  522827 certs.go:257] generating profile certs ...
	I1217 20:29:41.821619  522827 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/client.key
	I1217 20:29:41.821682  522827 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/apiserver.key.aa95dda5
	I1217 20:29:41.821733  522827 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/proxy-client.key
	I1217 20:29:41.821747  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1217 20:29:41.821765  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1217 20:29:41.821780  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1217 20:29:41.821791  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1217 20:29:41.821805  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1217 20:29:41.821817  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1217 20:29:41.821831  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1217 20:29:41.821846  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1217 20:29:41.821894  522827 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem (1338 bytes)
	W1217 20:29:41.821945  522827 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412_empty.pem, impossibly tiny 0 bytes
	I1217 20:29:41.821959  522827 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 20:29:41.821996  522827 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem (1082 bytes)
	I1217 20:29:41.822031  522827 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem (1123 bytes)
	I1217 20:29:41.822058  522827 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem (1675 bytes)
	I1217 20:29:41.822104  522827 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem (1708 bytes)
	I1217 20:29:41.822138  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:29:41.822159  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem -> /usr/share/ca-certificates/488412.pem
	I1217 20:29:41.822175  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem -> /usr/share/ca-certificates/4884122.pem
	I1217 20:29:41.822802  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 20:29:41.845035  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 20:29:41.868336  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 20:29:41.901049  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1217 20:29:41.918871  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 20:29:41.937168  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1217 20:29:41.954450  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 20:29:41.971684  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 20:29:41.988884  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 20:29:42.008645  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem --> /usr/share/ca-certificates/488412.pem (1338 bytes)
	I1217 20:29:42.029398  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem --> /usr/share/ca-certificates/4884122.pem (1708 bytes)
	I1217 20:29:42.047332  522827 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 20:29:42.061588  522827 ssh_runner.go:195] Run: openssl version
	I1217 20:29:42.068928  522827 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1217 20:29:42.069476  522827 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/488412.pem
	I1217 20:29:42.078814  522827 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/488412.pem /etc/ssl/certs/488412.pem
	I1217 20:29:42.088990  522827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/488412.pem
	I1217 20:29:42.093920  522827 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 17 20:21 /usr/share/ca-certificates/488412.pem
	I1217 20:29:42.093987  522827 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 20:21 /usr/share/ca-certificates/488412.pem
	I1217 20:29:42.094097  522827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/488412.pem
	I1217 20:29:42.137804  522827 command_runner.go:130] > 51391683
	I1217 20:29:42.138358  522827 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 20:29:42.147537  522827 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4884122.pem
	I1217 20:29:42.157061  522827 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4884122.pem /etc/ssl/certs/4884122.pem
	I1217 20:29:42.166751  522827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4884122.pem
	I1217 20:29:42.171759  522827 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 17 20:21 /usr/share/ca-certificates/4884122.pem
	I1217 20:29:42.171865  522827 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 20:21 /usr/share/ca-certificates/4884122.pem
	I1217 20:29:42.172010  522827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4884122.pem
	I1217 20:29:42.222515  522827 command_runner.go:130] > 3ec20f2e
	I1217 20:29:42.222600  522827 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 20:29:42.231935  522827 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:29:42.242232  522827 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 20:29:42.250913  522827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:29:42.255543  522827 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 17 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:29:42.255609  522827 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:29:42.255686  522827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:29:42.298361  522827 command_runner.go:130] > b5213941
	I1217 20:29:42.298457  522827 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
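	The openssl/ln exchanges above implement the standard OpenSSL hash-symlink convention: each CA certificate is exposed as /etc/ssl/certs/<subject-hash>.0, which is what the `test -L` probes verify. A minimal local sketch of the same step (linkCert is a hypothetical helper, not minikube's code; it shells out to openssl exactly as the log does):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func linkCert(certPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", certPath, err)
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem above
		link := filepath.Join(certsDir, hash+".0")
		// Replace any stale link, mirroring `ln -fs` in the log.
		_ = os.Remove(link)
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}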
	I1217 20:29:42.307141  522827 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 20:29:42.311232  522827 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 20:29:42.311338  522827 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1217 20:29:42.311364  522827 command_runner.go:130] > Device: 259,1	Inode: 1313050     Links: 1
	I1217 20:29:42.311390  522827 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1217 20:29:42.311425  522827 command_runner.go:130] > Access: 2025-12-17 20:25:34.088053460 +0000
	I1217 20:29:42.311446  522827 command_runner.go:130] > Modify: 2025-12-17 20:21:29.777917427 +0000
	I1217 20:29:42.311461  522827 command_runner.go:130] > Change: 2025-12-17 20:21:29.777917427 +0000
	I1217 20:29:42.311467  522827 command_runner.go:130] >  Birth: 2025-12-17 20:21:29.777917427 +0000
	I1217 20:29:42.311555  522827 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 20:29:42.352885  522827 command_runner.go:130] > Certificate will not expire
	I1217 20:29:42.353302  522827 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 20:29:42.407045  522827 command_runner.go:130] > Certificate will not expire
	I1217 20:29:42.407143  522827 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 20:29:42.455863  522827 command_runner.go:130] > Certificate will not expire
	I1217 20:29:42.456326  522827 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 20:29:42.505636  522827 command_runner.go:130] > Certificate will not expire
	I1217 20:29:42.506227  522827 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 20:29:42.548331  522827 command_runner.go:130] > Certificate will not expire
	I1217 20:29:42.548862  522827 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1217 20:29:42.590705  522827 command_runner.go:130] > Certificate will not expire
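	Each `-checkend 86400` run above asks openssl whether the certificate expires within 24 hours; "Certificate will not expire" means it does not. The same check can be done natively; a minimal sketch with crypto/x509 (expiresWithin is a hypothetical helper):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block found", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		// True when NotAfter falls inside the next d, matching `openssl x509 -checkend`.
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		if soon {
			fmt.Println("Certificate will expire within 24h")
		} else {
			fmt.Println("Certificate will not expire")
		}
	}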
	I1217 20:29:42.591277  522827 kubeadm.go:401] StartCluster: {Name:functional-655452 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-655452 Namespace:default APIServerHAVIP: APIServerName:minikubeCA A
PIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwa
rePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:29:42.591354  522827 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 20:29:42.591425  522827 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 20:29:42.618986  522827 cri.go:89] found id: ""
	I1217 20:29:42.619059  522827 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 20:29:42.626323  522827 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1217 20:29:42.626347  522827 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1217 20:29:42.626355  522827 command_runner.go:130] > /var/lib/minikube/etcd:
	I1217 20:29:42.627403  522827 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 20:29:42.627425  522827 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 20:29:42.627476  522827 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 20:29:42.635033  522827 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 20:29:42.635439  522827 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-655452" does not appear in /home/jenkins/minikube-integration/21808-485134/kubeconfig
	I1217 20:29:42.635552  522827 kubeconfig.go:62] /home/jenkins/minikube-integration/21808-485134/kubeconfig needs updating (will repair): [kubeconfig missing "functional-655452" cluster setting kubeconfig missing "functional-655452" context setting]
	I1217 20:29:42.635844  522827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-485134/kubeconfig: {Name:mkc7214551fa855d6d4575d627c3ecd54291b351 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:29:42.636278  522827 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21808-485134/kubeconfig
	I1217 20:29:42.636437  522827 kapi.go:59] client config for functional-655452: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/client.crt", KeyFile:"/home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/client.key", CAFile:"/home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb51f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
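	The kapi.go:59 dump above is a client-go rest.Config assembled from the just-repaired kubeconfig. A minimal sketch of producing an equivalent config, assuming client-go is available (the printed Host matches the https://192.168.49.2:8441 endpoint above):

	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Path taken from the log above; any kubeconfig with a "functional-655452" context works.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21808-485134/kubeconfig")
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("host:", cfg.Host)
		fmt.Println("client cert:", cfg.TLSClientConfig.CertFile)
	}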
	I1217 20:29:42.636955  522827 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1217 20:29:42.636974  522827 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1217 20:29:42.636979  522827 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1217 20:29:42.636984  522827 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1217 20:29:42.636988  522827 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1217 20:29:42.637054  522827 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1217 20:29:42.637345  522827 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 20:29:42.646583  522827 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1217 20:29:42.646685  522827 kubeadm.go:602] duration metric: took 19.253149ms to restartPrimaryControlPlane
	I1217 20:29:42.646744  522827 kubeadm.go:403] duration metric: took 55.459532ms to StartCluster
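	The `diff -u` run above is the reconfiguration check: identical files (exit 0) mean the running control plane already matches the freshly generated kubeadm.yaml.new, so restartPrimaryControlPlane can finish in milliseconds. A minimal sketch of that decision (needsReconfig is a hypothetical helper mapping diff's exit codes):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func needsReconfig(current, next string) (bool, error) {
		err := exec.Command("diff", "-u", current, next).Run()
		if err == nil {
			return false, nil // files identical: no reconfiguration needed
		}
		var ee *exec.ExitError
		if errors.As(err, &ee) && ee.ExitCode() == 1 {
			return true, nil // files differ: control plane must be reconfigured
		}
		return false, err // diff itself failed (missing file, etc.)
	}

	func main() {
		changed, err := needsReconfig("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
		fmt.Println(changed, err)
	}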
	I1217 20:29:42.646789  522827 settings.go:142] acquiring lock: {Name:mk91fac3a58d91b836cd0701183ef7cc1e571672 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:29:42.646894  522827 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21808-485134/kubeconfig
	I1217 20:29:42.647795  522827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-485134/kubeconfig: {Name:mkc7214551fa855d6d4575d627c3ecd54291b351 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:29:42.648137  522827 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 20:29:42.648371  522827 config.go:182] Loaded profile config "functional-655452": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 20:29:42.648423  522827 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 20:29:42.648485  522827 addons.go:70] Setting storage-provisioner=true in profile "functional-655452"
	I1217 20:29:42.648497  522827 addons.go:239] Setting addon storage-provisioner=true in "functional-655452"
	I1217 20:29:42.648521  522827 host.go:66] Checking if "functional-655452" exists ...
	I1217 20:29:42.648902  522827 addons.go:70] Setting default-storageclass=true in profile "functional-655452"
	I1217 20:29:42.648999  522827 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-655452"
	I1217 20:29:42.649042  522827 cli_runner.go:164] Run: docker container inspect functional-655452 --format={{.State.Status}}
	I1217 20:29:42.649424  522827 cli_runner.go:164] Run: docker container inspect functional-655452 --format={{.State.Status}}
	I1217 20:29:42.653921  522827 out.go:179] * Verifying Kubernetes components...
	I1217 20:29:42.656821  522827 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:29:42.689834  522827 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21808-485134/kubeconfig
	I1217 20:29:42.690004  522827 kapi.go:59] client config for functional-655452: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/client.crt", KeyFile:"/home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/client.key", CAFile:"/home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb51f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 20:29:42.690276  522827 addons.go:239] Setting addon default-storageclass=true in "functional-655452"
	I1217 20:29:42.690305  522827 host.go:66] Checking if "functional-655452" exists ...
	I1217 20:29:42.690860  522827 cli_runner.go:164] Run: docker container inspect functional-655452 --format={{.State.Status}}
	I1217 20:29:42.692598  522827 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 20:29:42.699772  522827 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:29:42.699803  522827 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 20:29:42.699871  522827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
	I1217 20:29:42.735975  522827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/functional-655452/id_rsa Username:docker}
	I1217 20:29:42.743517  522827 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 20:29:42.743543  522827 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 20:29:42.743664  522827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
	I1217 20:29:42.778325  522827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/functional-655452/id_rsa Username:docker}
	I1217 20:29:42.848025  522827 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 20:29:42.860324  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:29:42.899199  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 20:29:43.321927  522827 node_ready.go:35] waiting up to 6m0s for node "functional-655452" to be "Ready" ...
	I1217 20:29:43.322118  522827 type.go:168] "Request Body" body=""
	I1217 20:29:43.322203  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
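	The GET against /api/v1/nodes/functional-655452 above is the first probe of a readiness poll that may run for up to 6m0s. A minimal client-go sketch of the same wait loop (the kubeconfig path and the 500ms interval are illustrative assumptions, not minikube's own values):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21808-485134/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		for {
			node, err := cs.CoreV1().Nodes().Get(context.TODO(), "functional-655452", metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						fmt.Println("node functional-655452 is Ready")
						return
					}
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
	}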
	I1217 20:29:43.322465  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:29:43.322528  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:43.322567  522827 retry.go:31] will retry after 172.422642ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:43.322648  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:29:43.322689  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:43.322715  522827 retry.go:31] will retry after 167.097093ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:43.322809  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:43.490380  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 20:29:43.496229  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:29:43.581353  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:29:43.581433  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:43.581460  522827 retry.go:31] will retry after 331.036154ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:43.581553  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:29:43.581605  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:43.581639  522827 retry.go:31] will retry after 400.38477ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:43.822877  522827 type.go:168] "Request Body" body=""
	I1217 20:29:43.822949  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:43.823300  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:43.912722  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 20:29:43.970874  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:29:43.974629  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:43.974708  522827 retry.go:31] will retry after 462.319516ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:43.982922  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:29:44.044566  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:29:44.048683  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:44.048723  522827 retry.go:31] will retry after 443.115947ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:44.323122  522827 type.go:168] "Request Body" body=""
	I1217 20:29:44.323200  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:44.323555  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:44.437879  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 20:29:44.492501  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:29:44.499443  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:29:44.499482  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:44.499520  522827 retry.go:31] will retry after 1.265386144s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
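
The retry delays logged so far (400.38477ms, 462.319516ms, 443.115947ms, then 1.265386144s, and later several seconds) have the shape of exponential backoff with jitter: the base wait grows across attempts while randomization keeps individual delays from being strictly monotonic. A minimal sketch of that pattern follows, assuming nothing about minikube's retry package beyond what the "will retry after" lines show.

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff retries fn with exponentially growing, jittered delays,
// mimicking the shape (not the implementation) of the retries logged above.
func retryWithBackoff(maxAttempts int, initial time.Duration, fn func() error) error {
	delay := initial
	var err error
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		if err = fn(); err == nil {
			return nil
		}
		if attempt == maxAttempts {
			break
		}
		// Up to 50% jitter keeps concurrent retry loops from synchronizing.
		wait := delay + time.Duration(rand.Int63n(int64(delay/2)+1))
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
		delay *= 2
	}
	return fmt.Errorf("after %d attempts: %w", maxAttempts, err)
}

func main() {
	err := retryWithBackoff(5, 400*time.Millisecond, func() error {
		return fmt.Errorf("dial tcp [::1]:8441: connect: connection refused")
	})
	fmt.Println(err)
}

Jitter matters here because storageclass.yaml and storage-provisioner.yaml are retried by concurrent goroutines; without it the two loops would hit the dead apiserver in lockstep.
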
	I1217 20:29:44.551004  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:29:44.551045  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:44.551085  522827 retry.go:31] will retry after 774.139673ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:44.822655  522827 type.go:168] "Request Body" body=""
	I1217 20:29:44.822811  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:44.823195  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:45.323027  522827 type.go:168] "Request Body" body=""
	I1217 20:29:45.323135  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:45.323507  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:29:45.323621  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
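
Interleaved with the applies, minikube polls the node object about every 500ms and checks its Ready condition; the node_ready.go:55 warning above is that poll failing at the TCP dial. Here is a client-go sketch of the equivalent check, where the kubeconfig path and node name mirror the log and the loop itself is illustrative rather than minikube's actual code.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	for {
		node, err := client.CoreV1().Nodes().Get(context.TODO(),
			"functional-655452", metav1.GetOptions{})
		if err != nil {
			// With the apiserver down, this is the "connection refused"
			// warning seen in the log; keep retrying on the same cadence.
			fmt.Printf("error getting node (will retry): %v\n", err)
			time.Sleep(500 * time.Millisecond)
			continue
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
				fmt.Println("node is Ready")
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}
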
	I1217 20:29:45.325715  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:29:45.391952  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:29:45.395668  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:45.395750  522827 retry.go:31] will retry after 1.529541916s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:45.765134  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 20:29:45.822845  522827 type.go:168] "Request Body" body=""
	I1217 20:29:45.822973  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:45.823280  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:45.823537  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:29:45.827173  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:45.827206  522827 retry.go:31] will retry after 637.037829ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:46.322836  522827 type.go:168] "Request Body" body=""
	I1217 20:29:46.322927  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:46.323203  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:46.464492  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 20:29:46.525009  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:29:46.525062  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:46.525083  522827 retry.go:31] will retry after 1.110973738s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:46.822245  522827 type.go:168] "Request Body" body=""
	I1217 20:29:46.822323  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:46.822662  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
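
Two details of the round_trippers lines are worth decoding: the Accept header shows client-go content negotiation (protobuf preferred, JSON fallback), and the empty status="" with milliseconds=0 means the dial failed before any HTTP exchange took place. The sketch below shows how a rest.Config opts into that negotiation; AcceptContentTypes and ContentType are real rest.Config fields, while everything else here is illustrative.

package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	// Produces the "Accept: application/vnd.kubernetes.protobuf,application/json"
	// header seen in the request logs above.
	cfg.AcceptContentTypes = "application/vnd.kubernetes.protobuf,application/json"
	cfg.ContentType = "application/vnd.kubernetes.protobuf"
	client := kubernetes.NewForConfigOrDie(cfg)
	_ = client // requests made with this client negotiate protobuf first
}
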
	I1217 20:29:46.926099  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:29:46.987960  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:29:46.988006  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:46.988028  522827 retry.go:31] will retry after 1.385710629s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:47.322640  522827 type.go:168] "Request Body" body=""
	I1217 20:29:47.322715  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:47.323041  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:47.636709  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 20:29:47.697205  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:29:47.697243  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:47.697264  522827 retry.go:31] will retry after 4.090194732s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:47.822497  522827 type.go:168] "Request Body" body=""
	I1217 20:29:47.822589  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:47.822932  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:29:47.822989  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:29:48.322659  522827 type.go:168] "Request Body" body=""
	I1217 20:29:48.322736  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:48.323019  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:48.374352  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:29:48.431979  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:29:48.435409  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:48.435442  522827 retry.go:31] will retry after 3.099398493s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
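
Every failure in this section reduces to one root cause: nothing is accepting connections on apiserver port 8441 while it restarts. Before spending retries on full kubectl invocations, a cheap gate is a plain TCP probe; in the sketch below the address mirrors the log and the timings are arbitrary.

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForAPIServer blocks until addr accepts a TCP connection or the
// timeout expires. It proves only that the port is open, not that the
// apiserver is healthy.
func waitForAPIServer(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver %s not reachable within %v", addr, timeout)
}

func main() {
	if err := waitForAPIServer("192.168.49.2:8441", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}

A successful dial is only a weak signal; a stricter gate would GET the apiserver's /readyz endpoint once the socket accepts.
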
	I1217 20:29:48.823142  522827 type.go:168] "Request Body" body=""
	I1217 20:29:48.823220  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:48.823522  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:49.322226  522827 type.go:168] "Request Body" body=""
	I1217 20:29:49.322316  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:49.322645  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:49.822280  522827 type.go:168] "Request Body" body=""
	I1217 20:29:49.822356  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:49.822709  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:50.322247  522827 type.go:168] "Request Body" body=""
	I1217 20:29:50.322328  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:50.322668  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:29:50.322721  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:29:50.822373  522827 type.go:168] "Request Body" body=""
	I1217 20:29:50.822449  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:50.822719  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:51.322273  522827 type.go:168] "Request Body" body=""
	I1217 20:29:51.322361  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:51.322682  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:51.535119  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:29:51.608419  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:29:51.608461  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:51.608504  522827 retry.go:31] will retry after 5.948755722s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:51.787984  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 20:29:51.822458  522827 type.go:168] "Request Body" body=""
	I1217 20:29:51.822538  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:51.822817  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:51.846041  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:29:51.846085  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:51.846105  522827 retry.go:31] will retry after 5.856724643s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:52.322893  522827 type.go:168] "Request Body" body=""
	I1217 20:29:52.322982  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:52.323271  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:29:52.323320  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:29:52.822254  522827 type.go:168] "Request Body" body=""
	I1217 20:29:52.822329  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:52.822653  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:53.322391  522827 type.go:168] "Request Body" body=""
	I1217 20:29:53.322479  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:53.322825  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:53.822201  522827 type.go:168] "Request Body" body=""
	I1217 20:29:53.822273  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:53.822595  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:54.322265  522827 type.go:168] "Request Body" body=""
	I1217 20:29:54.322351  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:54.322683  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:54.822243  522827 type.go:168] "Request Body" body=""
	I1217 20:29:54.822322  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:54.822646  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:29:54.822705  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:29:55.322383  522827 type.go:168] "Request Body" body=""
	I1217 20:29:55.322466  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:55.322739  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:55.822262  522827 type.go:168] "Request Body" body=""
	I1217 20:29:55.822349  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:55.822680  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:56.322404  522827 type.go:168] "Request Body" body=""
	I1217 20:29:56.322493  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:56.322874  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:56.822564  522827 type.go:168] "Request Body" body=""
	I1217 20:29:56.822678  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:56.823046  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:29:56.823109  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:29:57.322771  522827 type.go:168] "Request Body" body=""
	I1217 20:29:57.322846  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:57.323141  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:57.557506  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:29:57.638482  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:29:57.642516  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:57.642548  522827 retry.go:31] will retry after 4.405911356s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:57.703796  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 20:29:57.764881  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:29:57.764928  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:57.764950  522827 retry.go:31] will retry after 7.580168113s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:57.823128  522827 type.go:168] "Request Body" body=""
	I1217 20:29:57.823235  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:57.823556  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:58.322216  522827 type.go:168] "Request Body" body=""
	I1217 20:29:58.322291  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:58.322579  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:58.822288  522827 type.go:168] "Request Body" body=""
	I1217 20:29:58.822434  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:58.822838  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:59.322555  522827 type.go:168] "Request Body" body=""
	I1217 20:29:59.322632  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:59.322948  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:29:59.323004  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:29:59.822770  522827 type.go:168] "Request Body" body=""
	I1217 20:29:59.822844  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:59.823119  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:00.323032  522827 type.go:168] "Request Body" body=""
	I1217 20:30:00.323116  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:00.323489  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:00.822257  522827 type.go:168] "Request Body" body=""
	I1217 20:30:00.822349  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:00.822678  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:01.322375  522827 type.go:168] "Request Body" body=""
	I1217 20:30:01.322459  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:01.322808  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:01.822288  522827 type.go:168] "Request Body" body=""
	I1217 20:30:01.822382  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:01.822690  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:30:01.822741  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:30:02.049201  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:30:02.136097  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:30:02.136138  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:30:02.136156  522827 retry.go:31] will retry after 5.567678678s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:30:02.322750  522827 type.go:168] "Request Body" body=""
	I1217 20:30:02.322843  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:02.323173  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:02.822939  522827 type.go:168] "Request Body" body=""
	I1217 20:30:02.823008  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:02.823350  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:03.323175  522827 type.go:168] "Request Body" body=""
	I1217 20:30:03.323258  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:03.323612  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:03.822172  522827 type.go:168] "Request Body" body=""
	I1217 20:30:03.822257  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:03.822603  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:04.322314  522827 type.go:168] "Request Body" body=""
	I1217 20:30:04.322401  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:04.322723  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:30:04.322781  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:30:04.822229  522827 type.go:168] "Request Body" body=""
	I1217 20:30:04.822306  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:04.822675  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:05.322398  522827 type.go:168] "Request Body" body=""
	I1217 20:30:05.322478  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:05.322839  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:05.346115  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 20:30:05.408232  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:30:05.408289  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:30:05.408313  522827 retry.go:31] will retry after 10.078206747s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:30:05.822854  522827 type.go:168] "Request Body" body=""
	I1217 20:30:05.822945  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:05.823317  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:06.323102  522827 type.go:168] "Request Body" body=""
	I1217 20:30:06.323172  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:06.323471  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:30:06.323519  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:30:06.822291  522827 type.go:168] "Request Body" body=""
	I1217 20:30:06.822371  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:06.822701  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:07.322790  522827 type.go:168] "Request Body" body=""
	I1217 20:30:07.322867  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:07.323162  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:07.703974  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:30:07.764647  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:30:07.764701  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:30:07.764721  522827 retry.go:31] will retry after 19.009086903s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:30:07.822843  522827 type.go:168] "Request Body" body=""
	I1217 20:30:07.822915  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:07.823267  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:08.323079  522827 type.go:168] "Request Body" body=""
	I1217 20:30:08.323151  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:08.323471  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:08.822188  522827 type.go:168] "Request Body" body=""
	I1217 20:30:08.822263  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:08.822521  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:30:08.822572  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:30:09.322241  522827 type.go:168] "Request Body" body=""
	I1217 20:30:09.322340  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:09.322671  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the identical GET poll repeated every ~500 ms through 20:30:15.322, each attempt failing with "dial tcp 192.168.49.2:8441: connect: connection refused"; the node_ready.go:55 will-retry warning recurred at 20:30:10.822, 20:30:12.823 and 20:30:15.322 ...]
	I1217 20:30:15.487149  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 20:30:15.557091  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:30:15.557136  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:30:15.557155  522827 retry.go:31] will retry after 12.964696684s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
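	The apply is failing client-side: with nothing listening on port 8441, kubectl cannot fetch the OpenAPI schema it validates manifests against, so the command exits before anything reaches the server. As a hedged illustration, the same command the test runs with the flag the error message itself suggests would look like the line below; note that skipping validation would not actually help here, because the apply itself still needs a reachable apiserver:
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force --validate=false -f /etc/kubernetes/addons/storageclass.yaml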
	I1217 20:30:15.822271  522827 type.go:168] "Request Body" body=""
	I1217 20:30:15.822351  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:15.822662  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... polling continued every ~500 ms through 20:30:26.322 with the same connection-refused result; the node_ready.go:55 warning recurred at 20:30:17.323, 20:30:19.822, 20:30:21.822 and 20:30:24.322 ...]
	I1217 20:30:26.774084  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:30:26.822641  522827 type.go:168] "Request Body" body=""
	I1217 20:30:26.822719  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:26.822976  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:30:26.823028  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:30:26.837910  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:30:26.841500  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:30:26.841530  522827 retry.go:31] will retry after 11.131595667s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
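	Both the addon applies and the node-ready poll are failing the same way, so a direct liveness probe against the apiserver endpoint would confirm the diagnosis. This is an illustrative check, not something the test itself runs; /readyz is the standard kube-apiserver readiness endpoint:
	curl -sk https://192.168.49.2:8441/readyz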
	[... three further polls at 20:30:27.322, 20:30:27.822 and 20:30:28.322 returned the same connection-refused result ...]
	I1217 20:30:28.523062  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 20:30:28.580613  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:30:28.584486  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:30:28.584522  522827 retry.go:31] will retry after 27.188888106s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
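	Worth noting: the node-ready poll targets https://192.168.49.2:8441 while kubectl inside the node (driven by /var/lib/minikube/kubeconfig) targets https://localhost:8441, and both are refused, which points at the apiserver process itself rather than at routing. A standard way to print the server a kubeconfig actually points at (illustrative, not taken from the log):
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'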
	I1217 20:30:28.822927  522827 type.go:168] "Request Body" body=""
	I1217 20:30:28.823014  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:28.823356  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:30:28.823415  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	[... polling continued every ~500 ms through 20:30:37.822 with the same connection-refused result; the warning recurred at 20:30:31.322, 20:30:33.323 and 20:30:35.822 ...]
	I1217 20:30:37.974039  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:30:38.040817  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:30:38.040869  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:30:38.040889  522827 retry.go:31] will retry after 31.049103728s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
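	The retry.go:31 waits seen so far (12.96 s, 11.13 s, 27.19 s, 31.05 s) look like a jittered backoff rather than a fixed interval. A minimal shell sketch of the same wait-then-reapply idea, assuming the apiserver eventually answers /readyz (this is not minikube's retry code):
	# illustrative: block until the apiserver is ready, then re-apply the addon
	until sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get --raw /readyz >/dev/null 2>&1; do
	  sleep 5
	done
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml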
	I1217 20:30:38.322172  522827 type.go:168] "Request Body" body=""
	I1217 20:30:38.322246  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:38.322560  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:30:38.322614  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	[... polling continued every ~500 ms through 20:30:55.322 with the same connection-refused result; the warning recurred at 20:30:40.322, 20:30:42.323, 20:30:44.822, 20:30:46.822, 20:30:48.823, 20:30:51.323 and 20:30:53.822 ...]
	I1217 20:30:55.774295  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 20:30:55.822774  522827 type.go:168] "Request Body" body=""
	I1217 20:30:55.822854  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:55.823178  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:30:55.823237  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:30:55.835665  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:30:55.835703  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:30:55.835722  522827 retry.go:31] will retry after 28.301795669s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
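	By this point every apply has failed for close to a minute, and on a crio job the usual next step is to ask the container runtime directly whether the kube-apiserver container is running and what its logs say. Illustrative crictl commands; the container ID placeholder is hypothetical:
	sudo crictl ps -a --name kube-apiserver
	sudo crictl logs <apiserver-container-id>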
	I1217 20:30:56.322365  522827 type.go:168] "Request Body" body=""
	I1217 20:30:56.322444  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:56.322778  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... polling continued every ~500 ms through 20:31:01.822 with the same connection-refused result; the warning recurred at 20:30:57.823 and 20:31:00.322 ...]
	I1217 20:31:02.322195  522827 type.go:168] "Request Body" body=""
	I1217 20:31:02.322276  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:02.322604  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:02.822463  522827 type.go:168] "Request Body" body=""
	I1217 20:31:02.822531  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:02.822797  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:31:02.822839  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:31:03.322231  522827 type.go:168] "Request Body" body=""
	I1217 20:31:03.322304  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:03.322643  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:03.822222  522827 type.go:168] "Request Body" body=""
	I1217 20:31:03.822303  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:03.822665  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:04.322217  522827 type.go:168] "Request Body" body=""
	I1217 20:31:04.322297  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:04.322674  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:04.822410  522827 type.go:168] "Request Body" body=""
	I1217 20:31:04.822489  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:04.822833  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:31:04.822889  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:31:05.322559  522827 type.go:168] "Request Body" body=""
	I1217 20:31:05.322643  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:05.323009  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:05.822714  522827 type.go:168] "Request Body" body=""
	I1217 20:31:05.822789  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:05.823090  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:06.322858  522827 type.go:168] "Request Body" body=""
	I1217 20:31:06.322935  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:06.323252  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:06.823001  522827 type.go:168] "Request Body" body=""
	I1217 20:31:06.823088  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:06.823427  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:31:06.823482  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:31:07.322676  522827 type.go:168] "Request Body" body=""
	I1217 20:31:07.322779  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:07.323088  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:07.822882  522827 type.go:168] "Request Body" body=""
	I1217 20:31:07.822978  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:07.823462  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:08.322191  522827 type.go:168] "Request Body" body=""
	I1217 20:31:08.322284  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:08.322582  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:08.822182  522827 type.go:168] "Request Body" body=""
	I1217 20:31:08.822262  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:08.822524  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:09.091155  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:31:09.152330  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:31:09.155944  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:31:09.156044  522827 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
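
The --validate=false workaround that kubectl's stderr suggests would likely not rescue this apply: validation merely fails first because downloading the OpenAPI schema needs the same apiserver that is refusing connections on 8441, so the apply itself would fail the same way. The ssh_runner/command_runner machinery around it boils down to running the command and capturing its exit status and stderr; a local stand-in sketch (minikube actually runs this over SSH inside the node):

package main

import (
	"bytes"
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// The command line copied from the ssh_runner entry above; sudo accepts
	// the KUBECONFIG=... assignment as an environment override.
	cmd := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl",
		"apply", "--force", "-f", "/etc/kubernetes/addons/storage-provisioner.yaml")
	var stdout, stderr bytes.Buffer
	cmd.Stdout, cmd.Stderr = &stdout, &stderr

	if err := cmd.Run(); err != nil {
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// The "Process exited with status 1" case reported in the log.
			fmt.Printf("apply failed, will retry: exit status %d\nstdout:\n%s\nstderr:\n%s\n",
				exitErr.ExitCode(), stdout.String(), stderr.String())
			return
		}
		panic(err)
	}
	fmt.Println("applied:", stdout.String())
}
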
	I1217 20:31:09.322225  522827 type.go:168] "Request Body" body=""
	I1217 20:31:09.322322  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:09.322666  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:31:09.322722  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:31:09.822389  522827 type.go:168] "Request Body" body=""
	I1217 20:31:09.822485  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:09.822808  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:10.322485  522827 type.go:168] "Request Body" body=""
	I1217 20:31:10.322557  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:10.322813  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:10.822228  522827 type.go:168] "Request Body" body=""
	I1217 20:31:10.822305  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:10.822670  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:11.322230  522827 type.go:168] "Request Body" body=""
	I1217 20:31:11.322323  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:11.322659  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:11.822317  522827 type.go:168] "Request Body" body=""
	I1217 20:31:11.822395  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:11.822653  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:31:11.822709  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:31:12.322704  522827 type.go:168] "Request Body" body=""
	I1217 20:31:12.322778  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:12.323076  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:12.822968  522827 type.go:168] "Request Body" body=""
	I1217 20:31:12.823050  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:12.823387  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:13.323001  522827 type.go:168] "Request Body" body=""
	I1217 20:31:13.323088  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:13.323368  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:13.823235  522827 type.go:168] "Request Body" body=""
	I1217 20:31:13.823315  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:13.823670  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:31:13.823726  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:31:14.322222  522827 type.go:168] "Request Body" body=""
	I1217 20:31:14.322295  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:14.322647  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:14.822221  522827 type.go:168] "Request Body" body=""
	I1217 20:31:14.822300  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:14.822581  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:15.322323  522827 type.go:168] "Request Body" body=""
	I1217 20:31:15.322403  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:15.322715  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:15.822407  522827 type.go:168] "Request Body" body=""
	I1217 20:31:15.822512  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:15.822811  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:16.322304  522827 type.go:168] "Request Body" body=""
	I1217 20:31:16.322385  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:16.322637  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:31:16.322683  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:31:16.822297  522827 type.go:168] "Request Body" body=""
	I1217 20:31:16.822416  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:16.822748  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:17.322737  522827 type.go:168] "Request Body" body=""
	I1217 20:31:17.322810  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:17.323096  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:17.822837  522827 type.go:168] "Request Body" body=""
	I1217 20:31:17.822931  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:17.823257  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:18.323065  522827 type.go:168] "Request Body" body=""
	I1217 20:31:18.323140  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:18.323508  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:31:18.323570  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:31:18.822258  522827 type.go:168] "Request Body" body=""
	I1217 20:31:18.822342  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:18.822689  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:19.322395  522827 type.go:168] "Request Body" body=""
	I1217 20:31:19.322475  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:19.322822  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:19.822246  522827 type.go:168] "Request Body" body=""
	I1217 20:31:19.822344  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:19.822647  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:20.322271  522827 type.go:168] "Request Body" body=""
	I1217 20:31:20.322363  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:20.322714  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:20.822389  522827 type.go:168] "Request Body" body=""
	I1217 20:31:20.822466  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:20.822785  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:31:20.822834  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:31:21.322233  522827 type.go:168] "Request Body" body=""
	I1217 20:31:21.322331  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:21.322658  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:21.822347  522827 type.go:168] "Request Body" body=""
	I1217 20:31:21.822422  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:21.822747  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:22.322631  522827 type.go:168] "Request Body" body=""
	I1217 20:31:22.322703  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:22.322965  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:22.822936  522827 type.go:168] "Request Body" body=""
	I1217 20:31:22.823012  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:22.823323  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:31:22.823370  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:31:23.323099  522827 type.go:168] "Request Body" body=""
	I1217 20:31:23.323180  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:23.323479  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:23.822130  522827 type.go:168] "Request Body" body=""
	I1217 20:31:23.822204  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:23.822471  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:24.138134  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 20:31:24.201991  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:31:24.202036  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:31:24.202117  522827 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1217 20:31:24.205262  522827 out.go:179] * Enabled addons: 
	I1217 20:31:24.208903  522827 addons.go:530] duration metric: took 1m41.560475312s for enable addons: enabled=[]
	I1217 20:31:24.322240  522827 type.go:168] "Request Body" body=""
	I1217 20:31:24.322318  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:24.322668  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:24.822384  522827 type.go:168] "Request Body" body=""
	I1217 20:31:24.822478  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:24.822815  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:25.322366  522827 type.go:168] "Request Body" body=""
	I1217 20:31:25.322441  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:25.322753  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:31:25.322800  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:31:25.822458  522827 type.go:168] "Request Body" body=""
	I1217 20:31:25.822532  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:25.822902  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:26.322508  522827 type.go:168] "Request Body" body=""
	I1217 20:31:26.322584  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:26.322912  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:26.822194  522827 type.go:168] "Request Body" body=""
	I1217 20:31:26.822272  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:26.822592  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:27.322423  522827 type.go:168] "Request Body" body=""
	I1217 20:31:27.322530  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:27.322841  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:31:27.322894  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:31:27.822547  522827 type.go:168] "Request Body" body=""
	I1217 20:31:27.822621  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:27.822984  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:28.322302  522827 type.go:168] "Request Body" body=""
	I1217 20:31:28.322385  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:28.322685  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:28.822382  522827 type.go:168] "Request Body" body=""
	I1217 20:31:28.822464  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:28.822833  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:29.322567  522827 type.go:168] "Request Body" body=""
	I1217 20:31:29.322643  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:29.322987  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:31:29.323043  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:31:29.822734  522827 type.go:168] "Request Body" body=""
	I1217 20:31:29.822807  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:29.823076  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:30.322834  522827 type.go:168] "Request Body" body=""
	I1217 20:31:30.322906  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:30.323262  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:30.823096  522827 type.go:168] "Request Body" body=""
	I1217 20:31:30.823184  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:30.823505  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:31.322243  522827 type.go:168] "Request Body" body=""
	I1217 20:31:31.322319  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:31.322606  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:31.822221  522827 type.go:168] "Request Body" body=""
	I1217 20:31:31.822295  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:31.822614  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:31:31.822668  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:31:32.322585  522827 type.go:168] "Request Body" body=""
	I1217 20:31:32.322665  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:32.322989  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:32.822991  522827 type.go:168] "Request Body" body=""
	I1217 20:31:32.823063  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:32.823325  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:33.323053  522827 type.go:168] "Request Body" body=""
	I1217 20:31:33.323151  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:33.323496  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:33.822867  522827 type.go:168] "Request Body" body=""
	I1217 20:31:33.822946  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:33.823324  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:31:33.823391  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:31:34.323215  522827 type.go:168] "Request Body" body=""
	I1217 20:31:34.323300  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:34.323630  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:34.822311  522827 type.go:168] "Request Body" body=""
	I1217 20:31:34.822386  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:34.822717  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:35.322215  522827 type.go:168] "Request Body" body=""
	I1217 20:31:35.322293  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:35.322668  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:35.822201  522827 type.go:168] "Request Body" body=""
	I1217 20:31:35.822284  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:35.822539  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:36.322256  522827 type.go:168] "Request Body" body=""
	I1217 20:31:36.322344  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:36.322708  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:31:36.322778  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:31:36.822306  522827 type.go:168] "Request Body" body=""
	I1217 20:31:36.822387  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:36.822729  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:37.322707  522827 type.go:168] "Request Body" body=""
	I1217 20:31:37.322775  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:37.323029  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:37.822287  522827 type.go:168] "Request Body" body=""
	I1217 20:31:37.822373  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:37.823676  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1217 20:31:38.322400  522827 type.go:168] "Request Body" body=""
	I1217 20:31:38.322477  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:38.322802  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:31:38.322850  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:31:38.822462  522827 type.go:168] "Request Body" body=""
	I1217 20:31:38.822552  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:38.822813  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:39.322538  522827 type.go:168] "Request Body" body=""
	I1217 20:31:39.322613  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:39.322992  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:39.822813  522827 type.go:168] "Request Body" body=""
	I1217 20:31:39.822889  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:39.823220  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:40.322969  522827 type.go:168] "Request Body" body=""
	I1217 20:31:40.323049  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:40.323311  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:31:40.323365  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:31:40.823132  522827 type.go:168] "Request Body" body=""
	I1217 20:31:40.823213  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:40.823556  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:41.322295  522827 type.go:168] "Request Body" body=""
	I1217 20:31:41.322379  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:41.322741  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:41.822257  522827 type.go:168] "Request Body" body=""
	I1217 20:31:41.822325  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:41.822584  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:42.322236  522827 type.go:168] "Request Body" body=""
	I1217 20:31:42.322354  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:42.322714  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:42.822359  522827 type.go:168] "Request Body" body=""
	I1217 20:31:42.822447  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:42.822773  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:31:42.822824  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:31:43.322198  522827 type.go:168] "Request Body" body=""
	I1217 20:31:43.322269  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:43.322552  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:43.822223  522827 type.go:168] "Request Body" body=""
	I1217 20:31:43.822299  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:43.822646  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:44.322231  522827 type.go:168] "Request Body" body=""
	I1217 20:31:44.322310  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:44.322646  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:44.822277  522827 type.go:168] "Request Body" body=""
	I1217 20:31:44.822351  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:44.822615  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:45.322247  522827 type.go:168] "Request Body" body=""
	I1217 20:31:45.322349  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:45.322649  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:31:45.322699  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:31:45.822364  522827 type.go:168] "Request Body" body=""
	I1217 20:31:45.822446  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:45.822791  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:46.322336  522827 type.go:168] "Request Body" body=""
	I1217 20:31:46.322408  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:46.322712  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:46.822435  522827 type.go:168] "Request Body" body=""
	I1217 20:31:46.822522  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:46.822879  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:47.322808  522827 type.go:168] "Request Body" body=""
	I1217 20:31:47.322888  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:47.323217  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:31:47.323277  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same GET https://192.168.49.2:8441/api/v1/nodes/functional-655452 polling cycle (identical Accept and User-Agent headers, empty response) repeats every ~500ms from I1217 20:31:47.823026 through W1217 20:32:48.822885; every attempt fails with "dial tcp 192.168.49.2:8441: connect: connection refused", and node_ready.go:55 logs the "will retry" warning roughly every 2-2.5s throughout ...]
	I1217 20:32:49.322495  522827 type.go:168] "Request Body" body=""
	I1217 20:32:49.322569  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:49.322865  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:49.822558  522827 type.go:168] "Request Body" body=""
	I1217 20:32:49.822637  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:49.822970  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:50.322764  522827 type.go:168] "Request Body" body=""
	I1217 20:32:50.322842  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:50.323193  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:50.822930  522827 type.go:168] "Request Body" body=""
	I1217 20:32:50.823006  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:50.823301  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:32:50.823453  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:32:51.322133  522827 type.go:168] "Request Body" body=""
	I1217 20:32:51.322212  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:51.322566  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:51.822287  522827 type.go:168] "Request Body" body=""
	I1217 20:32:51.822362  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:51.822679  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:52.322645  522827 type.go:168] "Request Body" body=""
	I1217 20:32:52.322727  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:52.323054  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:52.823092  522827 type.go:168] "Request Body" body=""
	I1217 20:32:52.823172  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:52.823505  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:32:52.823559  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:32:53.322267  522827 type.go:168] "Request Body" body=""
	I1217 20:32:53.322354  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:53.322691  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:53.822229  522827 type.go:168] "Request Body" body=""
	I1217 20:32:53.822299  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:53.822601  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:54.322252  522827 type.go:168] "Request Body" body=""
	I1217 20:32:54.322338  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:54.322639  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:54.822220  522827 type.go:168] "Request Body" body=""
	I1217 20:32:54.822304  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:54.822635  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:55.322307  522827 type.go:168] "Request Body" body=""
	I1217 20:32:55.322374  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:55.322666  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:32:55.322723  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:32:55.822406  522827 type.go:168] "Request Body" body=""
	I1217 20:32:55.822481  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:55.822818  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:56.322513  522827 type.go:168] "Request Body" body=""
	I1217 20:32:56.322588  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:56.322929  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:56.822610  522827 type.go:168] "Request Body" body=""
	I1217 20:32:56.822683  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:56.823008  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:57.322760  522827 type.go:168] "Request Body" body=""
	I1217 20:32:57.322844  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:57.323218  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:32:57.323276  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:32:57.823035  522827 type.go:168] "Request Body" body=""
	I1217 20:32:57.823125  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:57.823456  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:58.322253  522827 type.go:168] "Request Body" body=""
	I1217 20:32:58.322339  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:58.322631  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:58.822231  522827 type.go:168] "Request Body" body=""
	I1217 20:32:58.822313  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:58.822643  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:59.322230  522827 type.go:168] "Request Body" body=""
	I1217 20:32:59.322307  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:59.322642  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:59.822186  522827 type.go:168] "Request Body" body=""
	I1217 20:32:59.822256  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:59.822567  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:32:59.822624  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:33:00.322321  522827 type.go:168] "Request Body" body=""
	I1217 20:33:00.322425  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:00.322741  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:00.822652  522827 type.go:168] "Request Body" body=""
	I1217 20:33:00.822731  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:00.823058  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:01.322828  522827 type.go:168] "Request Body" body=""
	I1217 20:33:01.322902  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:01.323234  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:01.823025  522827 type.go:168] "Request Body" body=""
	I1217 20:33:01.823111  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:01.823448  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:33:01.823507  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:33:02.322504  522827 type.go:168] "Request Body" body=""
	I1217 20:33:02.322584  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:02.322930  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:02.822578  522827 type.go:168] "Request Body" body=""
	I1217 20:33:02.822653  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:02.822924  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:03.322752  522827 type.go:168] "Request Body" body=""
	I1217 20:33:03.322834  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:03.323161  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:03.822980  522827 type.go:168] "Request Body" body=""
	I1217 20:33:03.823059  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:03.823424  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:04.322126  522827 type.go:168] "Request Body" body=""
	I1217 20:33:04.322197  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:04.322455  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:33:04.322500  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:33:04.822206  522827 type.go:168] "Request Body" body=""
	I1217 20:33:04.822286  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:04.822623  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:05.322331  522827 type.go:168] "Request Body" body=""
	I1217 20:33:05.322416  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:05.322767  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:05.822465  522827 type.go:168] "Request Body" body=""
	I1217 20:33:05.822544  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:05.822897  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:06.322236  522827 type.go:168] "Request Body" body=""
	I1217 20:33:06.322318  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:06.322658  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:33:06.322719  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:33:06.822389  522827 type.go:168] "Request Body" body=""
	I1217 20:33:06.822469  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:06.822803  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:07.322597  522827 type.go:168] "Request Body" body=""
	I1217 20:33:07.322665  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:07.322926  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:07.822204  522827 type.go:168] "Request Body" body=""
	I1217 20:33:07.822282  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:07.822625  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:08.322315  522827 type.go:168] "Request Body" body=""
	I1217 20:33:08.322394  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:08.322734  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:33:08.322788  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:33:08.822200  522827 type.go:168] "Request Body" body=""
	I1217 20:33:08.822275  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:08.822538  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:09.322257  522827 type.go:168] "Request Body" body=""
	I1217 20:33:09.322339  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:09.322703  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:09.822418  522827 type.go:168] "Request Body" body=""
	I1217 20:33:09.822497  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:09.822851  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:10.322301  522827 type.go:168] "Request Body" body=""
	I1217 20:33:10.322371  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:10.322635  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:10.822269  522827 type.go:168] "Request Body" body=""
	I1217 20:33:10.822349  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:10.822626  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:33:10.822672  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:33:11.322251  522827 type.go:168] "Request Body" body=""
	I1217 20:33:11.322332  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:11.322653  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:11.822193  522827 type.go:168] "Request Body" body=""
	I1217 20:33:11.822295  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:11.822606  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:12.322610  522827 type.go:168] "Request Body" body=""
	I1217 20:33:12.322688  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:12.323024  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:12.822814  522827 type.go:168] "Request Body" body=""
	I1217 20:33:12.822898  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:12.823229  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:33:12.823291  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:33:13.323028  522827 type.go:168] "Request Body" body=""
	I1217 20:33:13.323108  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:13.323382  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:13.823191  522827 type.go:168] "Request Body" body=""
	I1217 20:33:13.823271  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:13.823643  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:14.322366  522827 type.go:168] "Request Body" body=""
	I1217 20:33:14.322445  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:14.322788  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:14.822460  522827 type.go:168] "Request Body" body=""
	I1217 20:33:14.822538  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:14.822850  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:15.322243  522827 type.go:168] "Request Body" body=""
	I1217 20:33:15.322321  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:15.322677  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:33:15.322736  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:33:15.822256  522827 type.go:168] "Request Body" body=""
	I1217 20:33:15.822335  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:15.822688  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:16.322376  522827 type.go:168] "Request Body" body=""
	I1217 20:33:16.322452  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:16.322776  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:16.822223  522827 type.go:168] "Request Body" body=""
	I1217 20:33:16.822299  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:16.822647  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:17.322462  522827 type.go:168] "Request Body" body=""
	I1217 20:33:17.322538  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:17.322921  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:33:17.322982  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:33:17.822190  522827 type.go:168] "Request Body" body=""
	I1217 20:33:17.822267  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:17.822594  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:18.322239  522827 type.go:168] "Request Body" body=""
	I1217 20:33:18.322318  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:18.322658  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:18.822360  522827 type.go:168] "Request Body" body=""
	I1217 20:33:18.822447  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:18.822810  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:19.322194  522827 type.go:168] "Request Body" body=""
	I1217 20:33:19.322274  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:19.322540  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:19.822224  522827 type.go:168] "Request Body" body=""
	I1217 20:33:19.822303  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:19.822648  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:33:19.822702  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:33:20.322363  522827 type.go:168] "Request Body" body=""
	I1217 20:33:20.322440  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:20.322810  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:20.822186  522827 type.go:168] "Request Body" body=""
	I1217 20:33:20.822289  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:20.822610  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:21.322209  522827 type.go:168] "Request Body" body=""
	I1217 20:33:21.322284  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:21.322660  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:21.822355  522827 type.go:168] "Request Body" body=""
	I1217 20:33:21.822454  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:21.822796  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:33:21.822847  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:33:22.322602  522827 type.go:168] "Request Body" body=""
	I1217 20:33:22.322708  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:22.322975  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:22.823014  522827 type.go:168] "Request Body" body=""
	I1217 20:33:22.823104  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:22.823484  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:23.322227  522827 type.go:168] "Request Body" body=""
	I1217 20:33:23.322304  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:23.322655  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:23.822334  522827 type.go:168] "Request Body" body=""
	I1217 20:33:23.822402  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:23.822683  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:24.322240  522827 type.go:168] "Request Body" body=""
	I1217 20:33:24.322323  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:24.322616  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:33:24.322662  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:33:24.822300  522827 type.go:168] "Request Body" body=""
	I1217 20:33:24.822380  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:24.822709  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:25.322192  522827 type.go:168] "Request Body" body=""
	I1217 20:33:25.322263  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:25.322513  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:25.822234  522827 type.go:168] "Request Body" body=""
	I1217 20:33:25.822315  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:25.822664  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:26.322370  522827 type.go:168] "Request Body" body=""
	I1217 20:33:26.322443  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:26.322762  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:33:26.322816  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:33:26.822197  522827 type.go:168] "Request Body" body=""
	I1217 20:33:26.822271  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:26.822589  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:27.322602  522827 type.go:168] "Request Body" body=""
	I1217 20:33:27.322684  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:27.323034  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:27.822840  522827 type.go:168] "Request Body" body=""
	I1217 20:33:27.822919  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:27.823295  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:28.323025  522827 type.go:168] "Request Body" body=""
	I1217 20:33:28.323101  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:28.323352  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:33:28.323391  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:33:28.823128  522827 type.go:168] "Request Body" body=""
	I1217 20:33:28.823210  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:28.823616  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:29.322300  522827 type.go:168] "Request Body" body=""
	I1217 20:33:29.322374  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:29.322713  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:29.822206  522827 type.go:168] "Request Body" body=""
	I1217 20:33:29.822276  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:29.822536  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:30.322280  522827 type.go:168] "Request Body" body=""
	I1217 20:33:30.322356  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:30.322680  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:30.822251  522827 type.go:168] "Request Body" body=""
	I1217 20:33:30.822327  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:30.822668  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:33:30.822720  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:33:31.322220  522827 type.go:168] "Request Body" body=""
	I1217 20:33:31.322288  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:31.322537  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:31.822223  522827 type.go:168] "Request Body" body=""
	I1217 20:33:31.822304  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:31.822622  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:32.322649  522827 type.go:168] "Request Body" body=""
	I1217 20:33:32.322726  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:32.323059  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:32.822867  522827 type.go:168] "Request Body" body=""
	I1217 20:33:32.822952  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:32.823248  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:33:32.823290  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:33:33.323108  522827 type.go:168] "Request Body" body=""
	I1217 20:33:33.323186  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:33.323537  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:33.822245  522827 type.go:168] "Request Body" body=""
	I1217 20:33:33.822321  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:33.822647  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:34.322173  522827 type.go:168] "Request Body" body=""
	I1217 20:33:34.322244  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:34.322543  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:34.822227  522827 type.go:168] "Request Body" body=""
	I1217 20:33:34.822310  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:34.822637  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:35.322393  522827 type.go:168] "Request Body" body=""
	I1217 20:33:35.322479  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:35.322809  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:33:35.322867  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:33:35.822191  522827 type.go:168] "Request Body" body=""
	I1217 20:33:35.822262  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:35.822572  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:36.322306  522827 type.go:168] "Request Body" body=""
	I1217 20:33:36.322382  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:36.322717  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:36.822442  522827 type.go:168] "Request Body" body=""
	I1217 20:33:36.822520  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:36.822854  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:37.322749  522827 type.go:168] "Request Body" body=""
	I1217 20:33:37.322816  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:37.323098  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:33:37.323140  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:33:37.822974  522827 type.go:168] "Request Body" body=""
	I1217 20:33:37.823045  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:37.823647  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:38.322337  522827 type.go:168] "Request Body" body=""
	I1217 20:33:38.322414  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:38.322731  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:38.822201  522827 type.go:168] "Request Body" body=""
	I1217 20:33:38.822273  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:38.822622  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:39.322260  522827 type.go:168] "Request Body" body=""
	I1217 20:33:39.322339  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:39.322710  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:39.822274  522827 type.go:168] "Request Body" body=""
	I1217 20:33:39.822350  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:39.822691  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:33:39.822743  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:33:40.322386  522827 type.go:168] "Request Body" body=""
	I1217 20:33:40.322461  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:40.322777  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:40.822263  522827 type.go:168] "Request Body" body=""
	I1217 20:33:40.822339  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:40.822619  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:41.322249  522827 type.go:168] "Request Body" body=""
	I1217 20:33:41.322340  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:41.322697  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:41.822374  522827 type.go:168] "Request Body" body=""
	I1217 20:33:41.822453  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:41.822786  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:33:41.822845  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:33:42.322518  522827 type.go:168] "Request Body" body=""
	I1217 20:33:42.322620  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:42.323128  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:42.823194  522827 type.go:168] "Request Body" body=""
	I1217 20:33:42.823280  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:42.823645  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:43.322176  522827 type.go:168] "Request Body" body=""
	I1217 20:33:43.322242  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:43.322490  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:43.822197  522827 type.go:168] "Request Body" body=""
	I1217 20:33:43.822292  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:43.822663  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:44.322229  522827 type.go:168] "Request Body" body=""
	I1217 20:33:44.322308  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:44.322678  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:33:44.322735  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:33:44.822242  522827 type.go:168] "Request Body" body=""
	I1217 20:33:44.822312  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:44.822572  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:45.322246  522827 type.go:168] "Request Body" body=""
	I1217 20:33:45.322321  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:45.322658  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:45.822380  522827 type.go:168] "Request Body" body=""
	I1217 20:33:45.822458  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:45.822809  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:46.322495  522827 type.go:168] "Request Body" body=""
	I1217 20:33:46.322574  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:46.322896  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:33:46.322955  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:33:46.822620  522827 type.go:168] "Request Body" body=""
	I1217 20:33:46.822697  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:46.823021  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:47.322811  522827 type.go:168] "Request Body" body=""
	I1217 20:33:47.322892  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:47.323256  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:47.823109  522827 type.go:168] "Request Body" body=""
	I1217 20:33:47.823190  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:47.823487  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:48.322186  522827 type.go:168] "Request Body" body=""
	I1217 20:33:48.322263  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:48.322612  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:48.822323  522827 type.go:168] "Request Body" body=""
	I1217 20:33:48.822399  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:48.822726  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:33:48.822794  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:33:49.322196  522827 type.go:168] "Request Body" body=""
	I1217 20:33:49.322273  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:49.322588  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:49.822263  522827 type.go:168] "Request Body" body=""
	I1217 20:33:49.822348  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:49.822724  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:50.322473  522827 type.go:168] "Request Body" body=""
	I1217 20:33:50.322557  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:50.322925  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:50.822209  522827 type.go:168] "Request Body" body=""
	I1217 20:33:50.822284  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:50.822560  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:51.322238  522827 type.go:168] "Request Body" body=""
	I1217 20:33:51.322322  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:51.322661  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:33:51.322714  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:33:51.822385  522827 type.go:168] "Request Body" body=""
	I1217 20:33:51.822483  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:51.822831  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:52.322696  522827 type.go:168] "Request Body" body=""
	I1217 20:33:52.322769  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:52.323046  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:52.823035  522827 type.go:168] "Request Body" body=""
	I1217 20:33:52.823114  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:52.823430  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:53.322170  522827 type.go:168] "Request Body" body=""
	I1217 20:33:53.322245  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:53.322568  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:53.822148  522827 type.go:168] "Request Body" body=""
	I1217 20:33:53.822225  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:53.822487  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:33:53.822527  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:33:54.322252  522827 type.go:168] "Request Body" body=""
	I1217 20:33:54.322346  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:54.322676  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:54.822391  522827 type.go:168] "Request Body" body=""
	I1217 20:33:54.822487  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:54.822807  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:55.322470  522827 type.go:168] "Request Body" body=""
	I1217 20:33:55.322551  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:55.322876  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:55.822280  522827 type.go:168] "Request Body" body=""
	I1217 20:33:55.822364  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:55.822753  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:33:55.822813  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:33:56.322272  522827 type.go:168] "Request Body" body=""
	I1217 20:33:56.322351  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:56.322670  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:56.822314  522827 type.go:168] "Request Body" body=""
	I1217 20:33:56.822391  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:56.822658  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:57.322710  522827 type.go:168] "Request Body" body=""
	I1217 20:33:57.322780  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:57.323117  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:57.822916  522827 type.go:168] "Request Body" body=""
	I1217 20:33:57.823001  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:57.823366  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:33:57.823421  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:33:58.323148  522827 type.go:168] "Request Body" body=""
	I1217 20:33:58.323218  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:58.323513  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:58.822212  522827 type.go:168] "Request Body" body=""
	I1217 20:33:58.822296  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:58.822656  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:59.322223  522827 type.go:168] "Request Body" body=""
	I1217 20:33:59.322305  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:59.322651  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:59.822221  522827 type.go:168] "Request Body" body=""
	I1217 20:33:59.822297  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:59.822641  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:00.322298  522827 type.go:168] "Request Body" body=""
	I1217 20:34:00.322392  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:00.322726  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:34:00.322782  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:34:00.822577  522827 type.go:168] "Request Body" body=""
	I1217 20:34:00.822662  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:00.823038  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:01.322657  522827 type.go:168] "Request Body" body=""
	I1217 20:34:01.322731  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:01.323025  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:01.822880  522827 type.go:168] "Request Body" body=""
	I1217 20:34:01.822955  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:01.823320  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:02.323040  522827 type.go:168] "Request Body" body=""
	I1217 20:34:02.323124  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:02.323461  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:34:02.323514  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:34:02.822183  522827 type.go:168] "Request Body" body=""
	I1217 20:34:02.822254  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:02.822522  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:03.322245  522827 type.go:168] "Request Body" body=""
	I1217 20:34:03.322323  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:03.322656  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:03.822270  522827 type.go:168] "Request Body" body=""
	I1217 20:34:03.822349  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:03.822703  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:04.322271  522827 type.go:168] "Request Body" body=""
	I1217 20:34:04.322351  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:04.322622  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:04.822245  522827 type.go:168] "Request Body" body=""
	I1217 20:34:04.822344  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:04.822655  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:34:04.822707  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:34:05.322405  522827 type.go:168] "Request Body" body=""
	I1217 20:34:05.322482  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:05.322821  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:05.822296  522827 type.go:168] "Request Body" body=""
	I1217 20:34:05.822365  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:05.822641  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:06.322257  522827 type.go:168] "Request Body" body=""
	I1217 20:34:06.322357  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:06.322688  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:06.822277  522827 type.go:168] "Request Body" body=""
	I1217 20:34:06.822353  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:06.822641  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:07.322615  522827 type.go:168] "Request Body" body=""
	I1217 20:34:07.322701  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:07.322998  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:34:07.323048  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:34:07.822861  522827 type.go:168] "Request Body" body=""
	I1217 20:34:07.822938  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:07.823293  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:08.323117  522827 type.go:168] "Request Body" body=""
	I1217 20:34:08.323193  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:08.323537  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:08.822232  522827 type.go:168] "Request Body" body=""
	I1217 20:34:08.822303  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:08.822638  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:09.322217  522827 type.go:168] "Request Body" body=""
	I1217 20:34:09.322290  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:09.322637  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:09.822237  522827 type.go:168] "Request Body" body=""
	I1217 20:34:09.822317  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:09.822642  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:34:09.822697  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:34:10.322196  522827 type.go:168] "Request Body" body=""
	I1217 20:34:10.322271  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:10.322575  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:10.822218  522827 type.go:168] "Request Body" body=""
	I1217 20:34:10.822302  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:10.822644  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:11.322351  522827 type.go:168] "Request Body" body=""
	I1217 20:34:11.322431  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:11.322804  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:11.822287  522827 type.go:168] "Request Body" body=""
	I1217 20:34:11.822357  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:11.822618  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:12.322611  522827 type.go:168] "Request Body" body=""
	I1217 20:34:12.322687  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:12.323025  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:34:12.323091  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:34:12.822902  522827 type.go:168] "Request Body" body=""
	I1217 20:34:12.822982  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:12.823336  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:13.323079  522827 type.go:168] "Request Body" body=""
	I1217 20:34:13.323153  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:13.323408  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:13.822161  522827 type.go:168] "Request Body" body=""
	I1217 20:34:13.822240  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:13.822575  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:14.322231  522827 type.go:168] "Request Body" body=""
	I1217 20:34:14.322308  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:14.322650  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:14.822224  522827 type.go:168] "Request Body" body=""
	I1217 20:34:14.822298  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:14.822572  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:34:14.822622  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:34:15.322292  522827 type.go:168] "Request Body" body=""
	I1217 20:34:15.322381  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:15.322726  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:15.822430  522827 type.go:168] "Request Body" body=""
	I1217 20:34:15.822518  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:15.822853  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:16.322471  522827 type.go:168] "Request Body" body=""
	I1217 20:34:16.322546  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:16.322836  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:16.822523  522827 type.go:168] "Request Body" body=""
	I1217 20:34:16.822605  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:16.822901  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:34:16.822951  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:34:17.322790  522827 type.go:168] "Request Body" body=""
	I1217 20:34:17.322869  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:17.323207  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:17.822955  522827 type.go:168] "Request Body" body=""
	I1217 20:34:17.823029  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:17.823314  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:18.323135  522827 type.go:168] "Request Body" body=""
	I1217 20:34:18.323209  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:18.323507  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:18.822255  522827 type.go:168] "Request Body" body=""
	I1217 20:34:18.822334  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:18.822699  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:19.322387  522827 type.go:168] "Request Body" body=""
	I1217 20:34:19.322457  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:19.322762  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:34:19.322824  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:34:19.822236  522827 type.go:168] "Request Body" body=""
	I1217 20:34:19.822309  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:19.822629  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:20.322246  522827 type.go:168] "Request Body" body=""
	I1217 20:34:20.322329  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:20.322668  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:20.822198  522827 type.go:168] "Request Body" body=""
	I1217 20:34:20.822275  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:20.822590  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:21.322284  522827 type.go:168] "Request Body" body=""
	I1217 20:34:21.322362  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:21.322710  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:21.822303  522827 type.go:168] "Request Body" body=""
	I1217 20:34:21.822382  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:21.822717  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:34:21.822772  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:34:22.322546  522827 type.go:168] "Request Body" body=""
	I1217 20:34:22.322615  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:22.322869  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:22.822850  522827 type.go:168] "Request Body" body=""
	I1217 20:34:22.822926  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:22.823275  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:23.323068  522827 type.go:168] "Request Body" body=""
	I1217 20:34:23.323142  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:23.323472  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:23.822173  522827 type.go:168] "Request Body" body=""
	I1217 20:34:23.822252  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:23.822565  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:24.322250  522827 type.go:168] "Request Body" body=""
	I1217 20:34:24.322333  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:24.322684  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:34:24.322736  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:34:24.822303  522827 type.go:168] "Request Body" body=""
	I1217 20:34:24.822394  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:24.822738  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:25.322430  522827 type.go:168] "Request Body" body=""
	I1217 20:34:25.322506  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:25.322760  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:25.822245  522827 type.go:168] "Request Body" body=""
	I1217 20:34:25.822324  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:25.822671  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:26.322262  522827 type.go:168] "Request Body" body=""
	I1217 20:34:26.322344  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:26.322685  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:26.822350  522827 type.go:168] "Request Body" body=""
	I1217 20:34:26.822425  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:26.822723  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:34:26.822775  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:34:27.322731  522827 type.go:168] "Request Body" body=""
	I1217 20:34:27.322805  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:27.323135  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:27.822789  522827 type.go:168] "Request Body" body=""
	I1217 20:34:27.822869  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:27.823223  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:28.323014  522827 type.go:168] "Request Body" body=""
	I1217 20:34:28.323092  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:28.323358  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:28.823134  522827 type.go:168] "Request Body" body=""
	I1217 20:34:28.823222  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:28.823569  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:34:28.823650  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:34:29.322221  522827 type.go:168] "Request Body" body=""
	I1217 20:34:29.322302  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:29.322620  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:29.822198  522827 type.go:168] "Request Body" body=""
	I1217 20:34:29.822278  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:29.822544  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:30.322232  522827 type.go:168] "Request Body" body=""
	I1217 20:34:30.322310  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:30.322633  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:30.822346  522827 type.go:168] "Request Body" body=""
	I1217 20:34:30.822427  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:30.822767  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:31.322434  522827 type.go:168] "Request Body" body=""
	I1217 20:34:31.322509  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:31.322812  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:34:31.322864  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:34:31.822232  522827 type.go:168] "Request Body" body=""
	I1217 20:34:31.822308  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:31.822637  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:32.322630  522827 type.go:168] "Request Body" body=""
	I1217 20:34:32.322703  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:32.323039  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:32.822905  522827 type.go:168] "Request Body" body=""
	I1217 20:34:32.822987  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:32.823335  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:33.323139  522827 type.go:168] "Request Body" body=""
	I1217 20:34:33.323215  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:33.323560  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:34:33.323633  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:34:33.822242  522827 type.go:168] "Request Body" body=""
	I1217 20:34:33.822322  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:33.822657  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... identical GET https://192.168.49.2:8441/api/v1/nodes/functional-655452 poll repeated every ~500ms, each returning an empty response ("connection refused"), with the node_ready.go "will retry" warning logged roughly every 2s, from 20:34:34 through 20:35:34 ...]
	I1217 20:35:35.322230  522827 type.go:168] "Request Body" body=""
	I1217 20:35:35.322307  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:35.322640  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:35:35.322702  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:35:35.822209  522827 type.go:168] "Request Body" body=""
	I1217 20:35:35.822278  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:35.822595  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:36.322264  522827 type.go:168] "Request Body" body=""
	I1217 20:35:36.322343  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:36.322666  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:36.822232  522827 type.go:168] "Request Body" body=""
	I1217 20:35:36.822306  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:36.822658  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:37.322496  522827 type.go:168] "Request Body" body=""
	I1217 20:35:37.322571  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:37.322824  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:35:37.322862  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:35:37.822509  522827 type.go:168] "Request Body" body=""
	I1217 20:35:37.822586  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:37.822928  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:38.322513  522827 type.go:168] "Request Body" body=""
	I1217 20:35:38.322595  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:38.323137  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:38.822886  522827 type.go:168] "Request Body" body=""
	I1217 20:35:38.822959  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:38.823295  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:39.323106  522827 type.go:168] "Request Body" body=""
	I1217 20:35:39.323188  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:39.323560  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:35:39.323633  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:35:39.822201  522827 type.go:168] "Request Body" body=""
	I1217 20:35:39.822276  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:39.822619  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:40.322173  522827 type.go:168] "Request Body" body=""
	I1217 20:35:40.322246  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:40.322545  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:40.822242  522827 type.go:168] "Request Body" body=""
	I1217 20:35:40.822317  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:40.822754  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:41.322470  522827 type.go:168] "Request Body" body=""
	I1217 20:35:41.322556  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:41.322901  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:41.822209  522827 type.go:168] "Request Body" body=""
	I1217 20:35:41.822282  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:41.822536  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:35:41.822583  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:35:42.322519  522827 type.go:168] "Request Body" body=""
	I1217 20:35:42.322603  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:42.322998  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:42.822247  522827 type.go:168] "Request Body" body=""
	I1217 20:35:42.822336  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:42.822693  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:43.322188  522827 type.go:168] "Request Body" body=""
	I1217 20:35:43.322249  522827 node_ready.go:38] duration metric: took 6m0.000239045s for node "functional-655452" to be "Ready" ...
	I1217 20:35:43.325291  522827 out.go:203] 
	W1217 20:35:43.328188  522827 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1217 20:35:43.328206  522827 out.go:285] * 
	W1217 20:35:43.330331  522827 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 20:35:43.333111  522827 out.go:203] 

                                                
                                                
** /stderr **
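The loop filling the stderr above is minikube's node-readiness wait: it issues GET /api/v1/nodes/functional-655452 roughly every 500 ms, treats "connection refused" as retryable, and gives up once the 6m0s StartHostTimeout from the cluster config expires (the "took 6m0.000239045s" duration metric). The same behaviour can be sketched with client-go as below; this is a minimal illustration, not minikube's actual implementation, and the kubeconfig path is a placeholder while the node name is taken from this run.

// readiness-poll sketch (illustrative only, not minikube's code)
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder path; the test run uses KUBECONFIG from the minikube profile.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll every 500ms for up to 6 minutes, mirroring the cadence in the log.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, "functional-655452", metav1.GetOptions{})
			if err != nil {
				// Mirrors the "will retry" warnings: connection refused is not fatal.
				return false, nil
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	fmt.Println("wait result:", err)
}

Run against this cluster, the sketch would finish with "context deadline exceeded", matching the WaitNodeCondition failure reported above.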
functional_test.go:676: failed to soft start minikube. args "out/minikube-linux-arm64 start -p functional-655452 --alsologtostderr -v=8": exit status 80
functional_test.go:678: soft start took 6m6.779136841s for "functional-655452" cluster.
I1217 20:35:43.964758  488412 config.go:182] Loaded profile config "functional-655452": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-655452
helpers_test.go:244: (dbg) docker inspect functional-655452:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ce7a6a54d14f425b4187bd0ca1b803527a4413b0e2019a0bca6ff0c4cc720b0f",
	        "Created": "2025-12-17T20:21:23.127111799Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 517339,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T20:21:23.205943598Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/ce7a6a54d14f425b4187bd0ca1b803527a4413b0e2019a0bca6ff0c4cc720b0f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ce7a6a54d14f425b4187bd0ca1b803527a4413b0e2019a0bca6ff0c4cc720b0f/hostname",
	        "HostsPath": "/var/lib/docker/containers/ce7a6a54d14f425b4187bd0ca1b803527a4413b0e2019a0bca6ff0c4cc720b0f/hosts",
	        "LogPath": "/var/lib/docker/containers/ce7a6a54d14f425b4187bd0ca1b803527a4413b0e2019a0bca6ff0c4cc720b0f/ce7a6a54d14f425b4187bd0ca1b803527a4413b0e2019a0bca6ff0c4cc720b0f-json.log",
	        "Name": "/functional-655452",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-655452:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-655452",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ce7a6a54d14f425b4187bd0ca1b803527a4413b0e2019a0bca6ff0c4cc720b0f",
	                "LowerDir": "/var/lib/docker/overlay2/b078ea641dafe3bb75ca3d83c43fd851f0f209e5f3a5a8b9ceab888e394031a8-init/diff:/var/lib/docker/overlay2/c4759b344ecb109c83d66e4ef56c76903f1d1e597efb86ab2d2c7911b1130a8f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b078ea641dafe3bb75ca3d83c43fd851f0f209e5f3a5a8b9ceab888e394031a8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b078ea641dafe3bb75ca3d83c43fd851f0f209e5f3a5a8b9ceab888e394031a8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b078ea641dafe3bb75ca3d83c43fd851f0f209e5f3a5a8b9ceab888e394031a8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-655452",
	                "Source": "/var/lib/docker/volumes/functional-655452/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-655452",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-655452",
	                "name.minikube.sigs.k8s.io": "functional-655452",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "77ae7dbcf69b3201be4ce65a88d235dbad078142318e01a5939425f9766fb924",
	            "SandboxKey": "/var/run/docker/netns/77ae7dbcf69b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33178"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33179"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33182"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33180"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33181"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-655452": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "36:c6:98:b5:58:5a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b68cd6e69ccdb81b0870dd976e2d5401ca696480842d2c469293121d59435cfb",
	                    "EndpointID": "82926bb4b08b5f66131c1026a8bc72a582e9cb8c6d97a278a494965923f47cb0",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-655452",
	                        "ce7a6a54d14f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
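In the inspect output, the apiserver port 8441/tcp is published to the host at 127.0.0.1:33181, while the failing requests in the log target 192.168.49.2:8441 directly on the docker network "functional-655452". A plain TCP dial against the published port helps separate a dead port-forward from a dead apiserver inside the container. The sketch below is illustrative only; the host port 33181 is specific to this run and changes on every start.

// port-forward probe sketch (illustrative only, not part of the test suite)
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Host-side forward for 8441/tcp, taken from the docker inspect output above.
	conn, err := net.DialTimeout("tcp", "127.0.0.1:33181", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver port-forward unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("TCP connect OK; the failure is inside the container, not the forward")
}

Either outcome narrows the search: a failed dial implicates the forward or the container itself, while a successful connect points at the apiserver process, consistent with the connection-refused errors logged above.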
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-655452 -n functional-655452
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-655452 -n functional-655452: exit status 2 (371.045837ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p functional-655452 logs -n 25: (1.136013707s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                      ARGS                                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-643319 ssh findmnt -T /mount-9p | grep 9p                                                                                            │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │                     │
	│ ssh            │ functional-643319 ssh findmnt -T /mount-9p | grep 9p                                                                                            │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │ 17 Dec 25 20:21 UTC │
	│ ssh            │ functional-643319 ssh -- ls -la /mount-9p                                                                                                       │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │ 17 Dec 25 20:21 UTC │
	│ ssh            │ functional-643319 ssh sudo umount -f /mount-9p                                                                                                  │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │                     │
	│ mount          │ -p functional-643319 /tmp/TestFunctionalparallelMountCmdVerifyCleanup781964025/001:/mount2 --alsologtostderr -v=1                               │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │                     │
	│ mount          │ -p functional-643319 /tmp/TestFunctionalparallelMountCmdVerifyCleanup781964025/001:/mount1 --alsologtostderr -v=1                               │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │                     │
	│ mount          │ -p functional-643319 /tmp/TestFunctionalparallelMountCmdVerifyCleanup781964025/001:/mount3 --alsologtostderr -v=1                               │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │                     │
	│ ssh            │ functional-643319 ssh findmnt -T /mount1                                                                                                        │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │                     │
	│ ssh            │ functional-643319 ssh findmnt -T /mount1                                                                                                        │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │ 17 Dec 25 20:21 UTC │
	│ ssh            │ functional-643319 ssh findmnt -T /mount2                                                                                                        │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │ 17 Dec 25 20:21 UTC │
	│ ssh            │ functional-643319 ssh findmnt -T /mount3                                                                                                        │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │ 17 Dec 25 20:21 UTC │
	│ mount          │ -p functional-643319 --kill=true                                                                                                                │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │                     │
	│ update-context │ functional-643319 update-context --alsologtostderr -v=2                                                                                         │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │ 17 Dec 25 20:21 UTC │
	│ update-context │ functional-643319 update-context --alsologtostderr -v=2                                                                                         │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │ 17 Dec 25 20:21 UTC │
	│ update-context │ functional-643319 update-context --alsologtostderr -v=2                                                                                         │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │ 17 Dec 25 20:21 UTC │
	│ image          │ functional-643319 image ls --format short --alsologtostderr                                                                                     │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │ 17 Dec 25 20:21 UTC │
	│ image          │ functional-643319 image ls --format yaml --alsologtostderr                                                                                      │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │ 17 Dec 25 20:21 UTC │
	│ ssh            │ functional-643319 ssh pgrep buildkitd                                                                                                           │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │                     │
	│ image          │ functional-643319 image build -t localhost/my-image:functional-643319 testdata/build --alsologtostderr                                          │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │ 17 Dec 25 20:21 UTC │
	│ image          │ functional-643319 image ls --format json --alsologtostderr                                                                                      │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │ 17 Dec 25 20:21 UTC │
	│ image          │ functional-643319 image ls --format table --alsologtostderr                                                                                     │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │ 17 Dec 25 20:21 UTC │
	│ image          │ functional-643319 image ls                                                                                                                      │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │ 17 Dec 25 20:21 UTC │
	│ delete         │ -p functional-643319                                                                                                                            │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │ 17 Dec 25 20:21 UTC │
	│ start          │ -p functional-655452 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │                     │
	│ start          │ -p functional-655452 --alsologtostderr -v=8                                                                                                     │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:29 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 20:29:37
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 20:29:37.230217  522827 out.go:360] Setting OutFile to fd 1 ...
	I1217 20:29:37.230338  522827 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:29:37.230348  522827 out.go:374] Setting ErrFile to fd 2...
	I1217 20:29:37.230354  522827 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:29:37.230641  522827 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-485134/.minikube/bin
	I1217 20:29:37.231040  522827 out.go:368] Setting JSON to false
	I1217 20:29:37.231956  522827 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":11527,"bootTime":1765991851,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1217 20:29:37.232033  522827 start.go:143] virtualization:  
	I1217 20:29:37.235360  522827 out.go:179] * [functional-655452] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 20:29:37.239166  522827 out.go:179]   - MINIKUBE_LOCATION=21808
	I1217 20:29:37.239533  522827 notify.go:221] Checking for updates...
	I1217 20:29:37.245507  522827 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 20:29:37.248369  522827 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-485134/kubeconfig
	I1217 20:29:37.251209  522827 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-485134/.minikube
	I1217 20:29:37.254179  522827 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 20:29:37.257129  522827 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 20:29:37.260562  522827 config.go:182] Loaded profile config "functional-655452": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 20:29:37.260726  522827 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 20:29:37.289208  522827 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 20:29:37.289391  522827 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 20:29:37.344995  522827 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 20:29:37.33566048 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 20:29:37.345107  522827 docker.go:319] overlay module found
	I1217 20:29:37.348246  522827 out.go:179] * Using the docker driver based on existing profile
	I1217 20:29:37.351193  522827 start.go:309] selected driver: docker
	I1217 20:29:37.351220  522827 start.go:927] validating driver "docker" against &{Name:functional-655452 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-655452 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:29:37.351378  522827 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 20:29:37.351479  522827 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 20:29:37.406404  522827 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 20:29:37.397152083 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 20:29:37.406839  522827 cni.go:84] Creating CNI manager for ""
	I1217 20:29:37.406903  522827 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 20:29:37.406958  522827 start.go:353] cluster config:
	{Name:functional-655452 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-655452 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:29:37.410074  522827 out.go:179] * Starting "functional-655452" primary control-plane node in "functional-655452" cluster
	I1217 20:29:37.413044  522827 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 20:29:37.415960  522827 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 20:29:37.418922  522827 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1217 20:29:37.418997  522827 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21808-485134/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4
	I1217 20:29:37.419012  522827 cache.go:65] Caching tarball of preloaded images
	I1217 20:29:37.419028  522827 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 20:29:37.419099  522827 preload.go:238] Found /home/jenkins/minikube-integration/21808-485134/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1217 20:29:37.419110  522827 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on crio
	I1217 20:29:37.419218  522827 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/config.json ...
	I1217 20:29:37.438883  522827 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 20:29:37.438908  522827 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 20:29:37.438929  522827 cache.go:243] Successfully downloaded all kic artifacts
	I1217 20:29:37.438964  522827 start.go:360] acquireMachinesLock for functional-655452: {Name:mk8b480106227149e1b79da3c4f580b9dc5e0f96 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 20:29:37.439024  522827 start.go:364] duration metric: took 37.399µs to acquireMachinesLock for "functional-655452"
	I1217 20:29:37.439047  522827 start.go:96] Skipping create...Using existing machine configuration
	I1217 20:29:37.439057  522827 fix.go:54] fixHost starting: 
	I1217 20:29:37.439341  522827 cli_runner.go:164] Run: docker container inspect functional-655452 --format={{.State.Status}}
	I1217 20:29:37.456072  522827 fix.go:112] recreateIfNeeded on functional-655452: state=Running err=<nil>
	W1217 20:29:37.456113  522827 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 20:29:37.459179  522827 out.go:252] * Updating the running docker "functional-655452" container ...
	I1217 20:29:37.459210  522827 machine.go:94] provisionDockerMachine start ...
	I1217 20:29:37.459290  522827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
	I1217 20:29:37.476101  522827 main.go:143] libmachine: Using SSH client type: native
	I1217 20:29:37.476449  522827 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1217 20:29:37.476466  522827 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 20:29:37.607148  522827 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-655452
	
	I1217 20:29:37.607176  522827 ubuntu.go:182] provisioning hostname "functional-655452"
	I1217 20:29:37.607253  522827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
	I1217 20:29:37.625523  522827 main.go:143] libmachine: Using SSH client type: native
	I1217 20:29:37.625850  522827 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1217 20:29:37.625869  522827 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-655452 && echo "functional-655452" | sudo tee /etc/hostname
	I1217 20:29:37.765012  522827 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-655452
	
	I1217 20:29:37.765095  522827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
	I1217 20:29:37.783574  522827 main.go:143] libmachine: Using SSH client type: native
	I1217 20:29:37.784233  522827 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1217 20:29:37.784256  522827 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-655452' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-655452/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-655452' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 20:29:37.923858  522827 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 20:29:37.923885  522827 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21808-485134/.minikube CaCertPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21808-485134/.minikube}
	I1217 20:29:37.923918  522827 ubuntu.go:190] setting up certificates
	I1217 20:29:37.923930  522827 provision.go:84] configureAuth start
	I1217 20:29:37.923995  522827 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-655452
	I1217 20:29:37.942198  522827 provision.go:143] copyHostCerts
	I1217 20:29:37.942245  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem
	I1217 20:29:37.942294  522827 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem, removing ...
	I1217 20:29:37.942308  522827 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem
	I1217 20:29:37.942385  522827 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem (1082 bytes)
	I1217 20:29:37.942483  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem
	I1217 20:29:37.942506  522827 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem, removing ...
	I1217 20:29:37.942510  522827 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem
	I1217 20:29:37.942538  522827 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem (1123 bytes)
	I1217 20:29:37.942584  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem
	I1217 20:29:37.942605  522827 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem, removing ...
	I1217 20:29:37.942613  522827 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem
	I1217 20:29:37.942638  522827 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem (1675 bytes)
	I1217 20:29:37.942696  522827 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem org=jenkins.functional-655452 san=[127.0.0.1 192.168.49.2 functional-655452 localhost minikube]
	I1217 20:29:38.205373  522827 provision.go:177] copyRemoteCerts
	I1217 20:29:38.205444  522827 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 20:29:38.205488  522827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
	I1217 20:29:38.222940  522827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/functional-655452/id_rsa Username:docker}
	I1217 20:29:38.324557  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1217 20:29:38.324643  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 20:29:38.342369  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1217 20:29:38.342442  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 20:29:38.361702  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1217 20:29:38.361816  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 20:29:38.379229  522827 provision.go:87] duration metric: took 455.281269ms to configureAuth
	I1217 20:29:38.379306  522827 ubuntu.go:206] setting minikube options for container-runtime
	I1217 20:29:38.379506  522827 config.go:182] Loaded profile config "functional-655452": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 20:29:38.379650  522827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
	I1217 20:29:38.397098  522827 main.go:143] libmachine: Using SSH client type: native
	I1217 20:29:38.397425  522827 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1217 20:29:38.397449  522827 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 20:29:38.710104  522827 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 20:29:38.710129  522827 machine.go:97] duration metric: took 1.250909554s to provisionDockerMachine
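
The SSH command above drops a one-line environment file on the node and restarts CRI-O. To confirm it took effect, read the file back; whether crio.service actually consumes it through an EnvironmentFile= directive is an assumption about the kicbase unit, not something this log shows:

    cat /etc/sysconfig/crio.minikube
    # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    systemctl show crio -p Environment   # assumes the unit references the file
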
	I1217 20:29:38.710141  522827 start.go:293] postStartSetup for "functional-655452" (driver="docker")
	I1217 20:29:38.710173  522827 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 20:29:38.710243  522827 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 20:29:38.710290  522827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
	I1217 20:29:38.729105  522827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/functional-655452/id_rsa Username:docker}
	I1217 20:29:38.823561  522827 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 20:29:38.826921  522827 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1217 20:29:38.826944  522827 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1217 20:29:38.826949  522827 command_runner.go:130] > VERSION_ID="12"
	I1217 20:29:38.826954  522827 command_runner.go:130] > VERSION="12 (bookworm)"
	I1217 20:29:38.826958  522827 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1217 20:29:38.826962  522827 command_runner.go:130] > ID=debian
	I1217 20:29:38.826966  522827 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1217 20:29:38.826971  522827 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1217 20:29:38.826976  522827 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1217 20:29:38.827033  522827 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 20:29:38.827056  522827 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 20:29:38.827068  522827 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-485134/.minikube/addons for local assets ...
	I1217 20:29:38.827127  522827 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-485134/.minikube/files for local assets ...
	I1217 20:29:38.827213  522827 filesync.go:149] local asset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem -> 4884122.pem in /etc/ssl/certs
	I1217 20:29:38.827224  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem -> /etc/ssl/certs/4884122.pem
	I1217 20:29:38.827310  522827 filesync.go:149] local asset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/test/nested/copy/488412/hosts -> hosts in /etc/test/nested/copy/488412
	I1217 20:29:38.827318  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/test/nested/copy/488412/hosts -> /etc/test/nested/copy/488412/hosts
	I1217 20:29:38.827361  522827 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/488412
	I1217 20:29:38.835073  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem --> /etc/ssl/certs/4884122.pem (1708 bytes)
	I1217 20:29:38.853051  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/test/nested/copy/488412/hosts --> /etc/test/nested/copy/488412/hosts (40 bytes)
	I1217 20:29:38.870277  522827 start.go:296] duration metric: took 160.119138ms for postStartSetup
	I1217 20:29:38.870416  522827 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 20:29:38.870497  522827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
	I1217 20:29:38.887313  522827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/functional-655452/id_rsa Username:docker}
	I1217 20:29:38.980667  522827 command_runner.go:130] > 14%
	I1217 20:29:38.980748  522827 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 20:29:38.985147  522827 command_runner.go:130] > 169G
	I1217 20:29:38.985687  522827 fix.go:56] duration metric: took 1.546626529s for fixHost
	I1217 20:29:38.985712  522827 start.go:83] releasing machines lock for "functional-655452", held for 1.546675825s
	I1217 20:29:38.985789  522827 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-655452
	I1217 20:29:39.004882  522827 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem (1338 bytes)
	W1217 20:29:39.004958  522827 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412_empty.pem, impossibly tiny 0 bytes
	I1217 20:29:39.004969  522827 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 20:29:39.005005  522827 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem (1082 bytes)
	I1217 20:29:39.005049  522827 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem (1123 bytes)
	I1217 20:29:39.005073  522827 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem (1675 bytes)
	I1217 20:29:39.005126  522827 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem (1708 bytes)
	I1217 20:29:39.005177  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:29:39.005197  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem -> /usr/share/ca-certificates/488412.pem
	I1217 20:29:39.005217  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem -> /usr/share/ca-certificates/4884122.pem
	I1217 20:29:39.005238  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 20:29:39.005294  522827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
	I1217 20:29:39.023309  522827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/functional-655452/id_rsa Username:docker}
	I1217 20:29:39.128919  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem --> /usr/share/ca-certificates/488412.pem (1338 bytes)
	I1217 20:29:39.146238  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem --> /usr/share/ca-certificates/4884122.pem (1708 bytes)
	I1217 20:29:39.163663  522827 ssh_runner.go:195] Run: openssl version
	I1217 20:29:39.169395  522827 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1217 20:29:39.169821  522827 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/488412.pem
	I1217 20:29:39.177042  522827 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/488412.pem /etc/ssl/certs/488412.pem
	I1217 20:29:39.184227  522827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/488412.pem
	I1217 20:29:39.187671  522827 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 17 20:21 /usr/share/ca-certificates/488412.pem
	I1217 20:29:39.187835  522827 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 20:21 /usr/share/ca-certificates/488412.pem
	I1217 20:29:39.187899  522827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/488412.pem
	I1217 20:29:39.232645  522827 command_runner.go:130] > 51391683
	I1217 20:29:39.233156  522827 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 20:29:39.240764  522827 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4884122.pem
	I1217 20:29:39.248070  522827 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4884122.pem /etc/ssl/certs/4884122.pem
	I1217 20:29:39.256139  522827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4884122.pem
	I1217 20:29:39.260468  522827 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 17 20:21 /usr/share/ca-certificates/4884122.pem
	I1217 20:29:39.260613  522827 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 20:21 /usr/share/ca-certificates/4884122.pem
	I1217 20:29:39.260717  522827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4884122.pem
	I1217 20:29:39.301324  522827 command_runner.go:130] > 3ec20f2e
	I1217 20:29:39.301774  522827 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 20:29:39.309564  522827 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:29:39.316908  522827 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 20:29:39.330430  522827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:29:39.334931  522827 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 17 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:29:39.335647  522827 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:29:39.335725  522827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:29:39.377554  522827 command_runner.go:130] > b5213941
	I1217 20:29:39.378955  522827 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 20:29:39.389619  522827 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-certificates >/dev/null 2>&1 && sudo update-ca-certificates || true"
	I1217 20:29:39.393257  522827 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-trust >/dev/null 2>&1 && sudo update-ca-trust extract || true"
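
The certificate plumbing above follows the standard OpenSSL pattern: place the PEM under /usr/share/ca-certificates, link it into /etc/ssl/certs under its subject hash (OpenSSL resolves CAs by <hash>.0), then refresh whichever trust tool the distro ships. Condensed, with the file name taken from the log:

    cert=/usr/share/ca-certificates/488412.pem
    hash=$(openssl x509 -hash -noout -in "$cert")   # 51391683 in the log
    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"
    # refresh whichever CA tool exists (Debian family vs. RHEL family)
    command -v update-ca-certificates >/dev/null 2>&1 && sudo update-ca-certificates || true
    command -v update-ca-trust >/dev/null 2>&1 && sudo update-ca-trust extract || true
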
	I1217 20:29:39.396841  522827 ssh_runner.go:195] Run: cat /version.json
	I1217 20:29:39.396923  522827 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 20:29:39.487006  522827 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1217 20:29:39.489563  522827 command_runner.go:130] > {"iso_version": "v1.37.0-1765579389-22117", "kicbase_version": "v0.0.48-1765661130-22141", "minikube_version": "v1.37.0", "commit": "cbb33128a244032d08f8fc6e6c9f03b30f0da3e4"}
	I1217 20:29:39.489734  522827 ssh_runner.go:195] Run: systemctl --version
	I1217 20:29:39.495686  522827 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1217 20:29:39.495789  522827 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1217 20:29:39.496199  522827 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 20:29:39.531768  522827 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1217 20:29:39.536045  522827 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1217 20:29:39.536498  522827 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 20:29:39.536609  522827 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 20:29:39.544584  522827 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 20:29:39.544609  522827 start.go:496] detecting cgroup driver to use...
	I1217 20:29:39.544639  522827 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 20:29:39.544686  522827 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 20:29:39.559677  522827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 20:29:39.572537  522827 docker.go:218] disabling cri-docker service (if available) ...
	I1217 20:29:39.572629  522827 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 20:29:39.588063  522827 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 20:29:39.601417  522827 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 20:29:39.711338  522827 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 20:29:39.828534  522827 docker.go:234] disabling docker service ...
	I1217 20:29:39.828602  522827 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 20:29:39.843450  522827 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 20:29:39.856661  522827 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 20:29:39.988443  522827 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 20:29:40.133139  522827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 20:29:40.147217  522827 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 20:29:40.161697  522827 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
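
The tee above writes crictl's client config, so later crictl calls need no endpoint flag. Once /etc/crictl.yaml is in place, the two invocations below are equivalent:

    sudo crictl info                                                      # reads /etc/crictl.yaml
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock info    # same, spelled out
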
	I1217 20:29:40.163096  522827 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 20:29:40.163182  522827 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:29:40.173178  522827 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1217 20:29:40.173338  522827 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:29:40.182803  522827 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:29:40.192168  522827 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:29:40.201463  522827 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 20:29:40.209602  522827 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:29:40.218600  522827 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:29:40.227088  522827 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
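
After the sed edits above, the drop-in should contain roughly the following TOML. The key/value pairs are reconstructed from the commands; placing them under [crio.image] and [crio.runtime] follows the stock crio.conf layout and is an assumption, since sed only rewrites lines wherever they already sit:

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
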
	I1217 20:29:40.236327  522827 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 20:29:40.243154  522827 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1217 20:29:40.244193  522827 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
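
The two kernel checks above verify that bridge-nf-call-iptables is already 1 and force ip_forward to 1 for the session. A persistent variant would go through a sysctl drop-in instead of the echo; the file name below is illustrative:

    sudo tee /etc/sysctl.d/99-kubernetes.conf >/dev/null <<'EOF'
    net.bridge.bridge-nf-call-iptables = 1
    net.ipv4.ip_forward = 1
    EOF
    sudo sysctl --system   # reload all sysctl.d drop-ins
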
	I1217 20:29:40.251635  522827 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:29:40.361488  522827 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 20:29:40.546740  522827 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 20:29:40.546847  522827 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 20:29:40.551021  522827 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1217 20:29:40.551089  522827 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1217 20:29:40.551102  522827 command_runner.go:130] > Device: 0,72	Inode: 1636        Links: 1
	I1217 20:29:40.551127  522827 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1217 20:29:40.551137  522827 command_runner.go:130] > Access: 2025-12-17 20:29:40.478294705 +0000
	I1217 20:29:40.551143  522827 command_runner.go:130] > Modify: 2025-12-17 20:29:40.478294705 +0000
	I1217 20:29:40.551149  522827 command_runner.go:130] > Change: 2025-12-17 20:29:40.478294705 +0000
	I1217 20:29:40.551152  522827 command_runner.go:130] >  Birth: -
	I1217 20:29:40.551189  522827 start.go:564] Will wait 60s for crictl version
	I1217 20:29:40.551247  522827 ssh_runner.go:195] Run: which crictl
	I1217 20:29:40.554786  522827 command_runner.go:130] > /usr/local/bin/crictl
	I1217 20:29:40.554923  522827 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 20:29:40.577444  522827 command_runner.go:130] > Version:  0.1.0
	I1217 20:29:40.577470  522827 command_runner.go:130] > RuntimeName:  cri-o
	I1217 20:29:40.577476  522827 command_runner.go:130] > RuntimeVersion:  1.34.3
	I1217 20:29:40.577491  522827 command_runner.go:130] > RuntimeApiVersion:  v1
	I1217 20:29:40.579694  522827 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
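
The two "Will wait 60s" phases above amount to polling the socket with stat and then crictl until both succeed. A shell equivalent of that readiness loop, with the timeout taken from the log:

    for _ in $(seq 1 60); do
      stat /var/run/crio/crio.sock >/dev/null 2>&1 &&
        sudo crictl version >/dev/null 2>&1 && break
      sleep 1
    done
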
	I1217 20:29:40.579819  522827 ssh_runner.go:195] Run: crio --version
	I1217 20:29:40.609324  522827 command_runner.go:130] > crio version 1.34.3
	I1217 20:29:40.609350  522827 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1217 20:29:40.609357  522827 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1217 20:29:40.609362  522827 command_runner.go:130] >    GitTreeState:   dirty
	I1217 20:29:40.609367  522827 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1217 20:29:40.609371  522827 command_runner.go:130] >    GoVersion:      go1.24.6
	I1217 20:29:40.609375  522827 command_runner.go:130] >    Compiler:       gc
	I1217 20:29:40.609382  522827 command_runner.go:130] >    Platform:       linux/arm64
	I1217 20:29:40.609386  522827 command_runner.go:130] >    Linkmode:       static
	I1217 20:29:40.609390  522827 command_runner.go:130] >    BuildTags:
	I1217 20:29:40.609393  522827 command_runner.go:130] >      static
	I1217 20:29:40.609397  522827 command_runner.go:130] >      netgo
	I1217 20:29:40.609401  522827 command_runner.go:130] >      osusergo
	I1217 20:29:40.609410  522827 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1217 20:29:40.609414  522827 command_runner.go:130] >      seccomp
	I1217 20:29:40.609421  522827 command_runner.go:130] >      apparmor
	I1217 20:29:40.609424  522827 command_runner.go:130] >      selinux
	I1217 20:29:40.609429  522827 command_runner.go:130] >    LDFlags:          unknown
	I1217 20:29:40.609433  522827 command_runner.go:130] >    SeccompEnabled:   true
	I1217 20:29:40.609441  522827 command_runner.go:130] >    AppArmorEnabled:  false
	I1217 20:29:40.609527  522827 ssh_runner.go:195] Run: crio --version
	I1217 20:29:40.638467  522827 command_runner.go:130] > crio version 1.34.3
	I1217 20:29:40.638491  522827 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1217 20:29:40.638499  522827 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1217 20:29:40.638505  522827 command_runner.go:130] >    GitTreeState:   dirty
	I1217 20:29:40.638509  522827 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1217 20:29:40.638516  522827 command_runner.go:130] >    GoVersion:      go1.24.6
	I1217 20:29:40.638520  522827 command_runner.go:130] >    Compiler:       gc
	I1217 20:29:40.638533  522827 command_runner.go:130] >    Platform:       linux/arm64
	I1217 20:29:40.638543  522827 command_runner.go:130] >    Linkmode:       static
	I1217 20:29:40.638547  522827 command_runner.go:130] >    BuildTags:
	I1217 20:29:40.638550  522827 command_runner.go:130] >      static
	I1217 20:29:40.638554  522827 command_runner.go:130] >      netgo
	I1217 20:29:40.638558  522827 command_runner.go:130] >      osusergo
	I1217 20:29:40.638568  522827 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1217 20:29:40.638572  522827 command_runner.go:130] >      seccomp
	I1217 20:29:40.638576  522827 command_runner.go:130] >      apparmor
	I1217 20:29:40.638583  522827 command_runner.go:130] >      selinux
	I1217 20:29:40.638587  522827 command_runner.go:130] >    LDFlags:          unknown
	I1217 20:29:40.638592  522827 command_runner.go:130] >    SeccompEnabled:   true
	I1217 20:29:40.638604  522827 command_runner.go:130] >    AppArmorEnabled:  false
	I1217 20:29:40.644077  522827 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	I1217 20:29:40.647046  522827 cli_runner.go:164] Run: docker network inspect functional-655452 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
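
The inspect template above packs name, driver, subnet, gateway, MTU, and container IPs into one ad-hoc JSON blob. When poking at the network by hand, the individual fields come out more readably with smaller templates, for example:

    docker network inspect functional-655452 -f '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
    docker network inspect functional-655452 -f '{{.Driver}}'
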
	I1217 20:29:40.665190  522827 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1217 20:29:40.669398  522827 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1217 20:29:40.669593  522827 kubeadm.go:884] updating cluster {Name:functional-655452 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-655452 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 20:29:40.669700  522827 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1217 20:29:40.669779  522827 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 20:29:40.704282  522827 command_runner.go:130] > {
	I1217 20:29:40.704302  522827 command_runner.go:130] >   "images":  [
	I1217 20:29:40.704307  522827 command_runner.go:130] >     {
	I1217 20:29:40.704316  522827 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1217 20:29:40.704321  522827 command_runner.go:130] >       "repoTags":  [
	I1217 20:29:40.704328  522827 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1217 20:29:40.704331  522827 command_runner.go:130] >       ],
	I1217 20:29:40.704335  522827 command_runner.go:130] >       "repoDigests":  [
	I1217 20:29:40.704350  522827 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1217 20:29:40.704362  522827 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1217 20:29:40.704370  522827 command_runner.go:130] >       ],
	I1217 20:29:40.704374  522827 command_runner.go:130] >       "size":  "111333938",
	I1217 20:29:40.704379  522827 command_runner.go:130] >       "username":  "",
	I1217 20:29:40.704389  522827 command_runner.go:130] >       "pinned":  false
	I1217 20:29:40.704403  522827 command_runner.go:130] >     },
	I1217 20:29:40.704406  522827 command_runner.go:130] >     {
	I1217 20:29:40.704413  522827 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1217 20:29:40.704419  522827 command_runner.go:130] >       "repoTags":  [
	I1217 20:29:40.704425  522827 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1217 20:29:40.704429  522827 command_runner.go:130] >       ],
	I1217 20:29:40.704433  522827 command_runner.go:130] >       "repoDigests":  [
	I1217 20:29:40.704445  522827 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1217 20:29:40.704454  522827 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1217 20:29:40.704460  522827 command_runner.go:130] >       ],
	I1217 20:29:40.704464  522827 command_runner.go:130] >       "size":  "29037500",
	I1217 20:29:40.704468  522827 command_runner.go:130] >       "username":  "",
	I1217 20:29:40.704476  522827 command_runner.go:130] >       "pinned":  false
	I1217 20:29:40.704482  522827 command_runner.go:130] >     },
	I1217 20:29:40.704485  522827 command_runner.go:130] >     {
	I1217 20:29:40.704494  522827 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1217 20:29:40.704503  522827 command_runner.go:130] >       "repoTags":  [
	I1217 20:29:40.704509  522827 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1217 20:29:40.704512  522827 command_runner.go:130] >       ],
	I1217 20:29:40.704516  522827 command_runner.go:130] >       "repoDigests":  [
	I1217 20:29:40.704528  522827 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1217 20:29:40.704536  522827 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1217 20:29:40.704542  522827 command_runner.go:130] >       ],
	I1217 20:29:40.704547  522827 command_runner.go:130] >       "size":  "74491780",
	I1217 20:29:40.704551  522827 command_runner.go:130] >       "username":  "nonroot",
	I1217 20:29:40.704556  522827 command_runner.go:130] >       "pinned":  false
	I1217 20:29:40.704561  522827 command_runner.go:130] >     },
	I1217 20:29:40.704568  522827 command_runner.go:130] >     {
	I1217 20:29:40.704579  522827 command_runner.go:130] >       "id":  "271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57",
	I1217 20:29:40.704583  522827 command_runner.go:130] >       "repoTags":  [
	I1217 20:29:40.704588  522827 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.6-0"
	I1217 20:29:40.704594  522827 command_runner.go:130] >       ],
	I1217 20:29:40.704598  522827 command_runner.go:130] >       "repoDigests":  [
	I1217 20:29:40.704605  522827 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890",
	I1217 20:29:40.704613  522827 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:aa0d8bc8f6a6c3655b8efe0a10c5bf052f5574ebe13f904c5b0c9002ce4b2561"
	I1217 20:29:40.704619  522827 command_runner.go:130] >       ],
	I1217 20:29:40.704623  522827 command_runner.go:130] >       "size":  "60850387",
	I1217 20:29:40.704626  522827 command_runner.go:130] >       "uid":  {
	I1217 20:29:40.704630  522827 command_runner.go:130] >         "value":  "0"
	I1217 20:29:40.704636  522827 command_runner.go:130] >       },
	I1217 20:29:40.704645  522827 command_runner.go:130] >       "username":  "",
	I1217 20:29:40.704657  522827 command_runner.go:130] >       "pinned":  false
	I1217 20:29:40.704660  522827 command_runner.go:130] >     },
	I1217 20:29:40.704664  522827 command_runner.go:130] >     {
	I1217 20:29:40.704673  522827 command_runner.go:130] >       "id":  "3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54",
	I1217 20:29:40.704679  522827 command_runner.go:130] >       "repoTags":  [
	I1217 20:29:40.704685  522827 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-rc.1"
	I1217 20:29:40.704689  522827 command_runner.go:130] >       ],
	I1217 20:29:40.704693  522827 command_runner.go:130] >       "repoDigests":  [
	I1217 20:29:40.704704  522827 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:58367b5c0428495c0c12411fa7a018f5d40fe57307b85d8935b1ed35706ff7ee",
	I1217 20:29:40.704721  522827 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:e6ee3594f9ff061c53d6721bc04b810ec4227e28da3bd98e59206d552d45cde8"
	I1217 20:29:40.704724  522827 command_runner.go:130] >       ],
	I1217 20:29:40.704729  522827 command_runner.go:130] >       "size":  "85015535",
	I1217 20:29:40.704735  522827 command_runner.go:130] >       "uid":  {
	I1217 20:29:40.704739  522827 command_runner.go:130] >         "value":  "0"
	I1217 20:29:40.704742  522827 command_runner.go:130] >       },
	I1217 20:29:40.704746  522827 command_runner.go:130] >       "username":  "",
	I1217 20:29:40.704753  522827 command_runner.go:130] >       "pinned":  false
	I1217 20:29:40.704756  522827 command_runner.go:130] >     },
	I1217 20:29:40.704759  522827 command_runner.go:130] >     {
	I1217 20:29:40.704772  522827 command_runner.go:130] >       "id":  "a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a",
	I1217 20:29:40.704779  522827 command_runner.go:130] >       "repoTags":  [
	I1217 20:29:40.704785  522827 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1"
	I1217 20:29:40.704788  522827 command_runner.go:130] >       ],
	I1217 20:29:40.704793  522827 command_runner.go:130] >       "repoDigests":  [
	I1217 20:29:40.704803  522827 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:42360249c0c729ed0542bc8e4a6cd9ba4df358a4e5a140f6c24d5f966ee5121f",
	I1217 20:29:40.704813  522827 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:57ab0f75f58d99f4be7bff7bdda015fcbf1b7c20e58ba2722c8c39f751dc8c98"
	I1217 20:29:40.704822  522827 command_runner.go:130] >       ],
	I1217 20:29:40.704827  522827 command_runner.go:130] >       "size":  "72170325",
	I1217 20:29:40.704831  522827 command_runner.go:130] >       "uid":  {
	I1217 20:29:40.704835  522827 command_runner.go:130] >         "value":  "0"
	I1217 20:29:40.704838  522827 command_runner.go:130] >       },
	I1217 20:29:40.704842  522827 command_runner.go:130] >       "username":  "",
	I1217 20:29:40.704846  522827 command_runner.go:130] >       "pinned":  false
	I1217 20:29:40.704848  522827 command_runner.go:130] >     },
	I1217 20:29:40.704851  522827 command_runner.go:130] >     {
	I1217 20:29:40.704858  522827 command_runner.go:130] >       "id":  "7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e",
	I1217 20:29:40.704861  522827 command_runner.go:130] >       "repoTags":  [
	I1217 20:29:40.704866  522827 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-rc.1"
	I1217 20:29:40.704870  522827 command_runner.go:130] >       ],
	I1217 20:29:40.704875  522827 command_runner.go:130] >       "repoDigests":  [
	I1217 20:29:40.704883  522827 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:709cbcd809826ad98b553d8e283a04db70fa653526d1c2a5e1b50000701b2b6f",
	I1217 20:29:40.704894  522827 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bdd1fa8b53558a2e1967379a36b085c93faf15581e5fa9f212baf679d89c5bb5"
	I1217 20:29:40.704898  522827 command_runner.go:130] >       ],
	I1217 20:29:40.704903  522827 command_runner.go:130] >       "size":  "74107287",
	I1217 20:29:40.704910  522827 command_runner.go:130] >       "username":  "",
	I1217 20:29:40.704914  522827 command_runner.go:130] >       "pinned":  false
	I1217 20:29:40.704926  522827 command_runner.go:130] >     },
	I1217 20:29:40.704930  522827 command_runner.go:130] >     {
	I1217 20:29:40.704936  522827 command_runner.go:130] >       "id":  "abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde",
	I1217 20:29:40.704940  522827 command_runner.go:130] >       "repoTags":  [
	I1217 20:29:40.704946  522827 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-rc.1"
	I1217 20:29:40.704949  522827 command_runner.go:130] >       ],
	I1217 20:29:40.704963  522827 command_runner.go:130] >       "repoDigests":  [
	I1217 20:29:40.704975  522827 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8155e3db27c7081abfc8eb5da70820cfeaf0bba7449e45360e8220e670f417d3",
	I1217 20:29:40.704993  522827 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:9ac9664e74153a60bf2c27af77561abc33d85a716a48893c7e50ad356adc4ea0"
	I1217 20:29:40.705000  522827 command_runner.go:130] >       ],
	I1217 20:29:40.705005  522827 command_runner.go:130] >       "size":  "49822549",
	I1217 20:29:40.705008  522827 command_runner.go:130] >       "uid":  {
	I1217 20:29:40.705014  522827 command_runner.go:130] >         "value":  "0"
	I1217 20:29:40.705017  522827 command_runner.go:130] >       },
	I1217 20:29:40.705025  522827 command_runner.go:130] >       "username":  "",
	I1217 20:29:40.705029  522827 command_runner.go:130] >       "pinned":  false
	I1217 20:29:40.705033  522827 command_runner.go:130] >     },
	I1217 20:29:40.705036  522827 command_runner.go:130] >     {
	I1217 20:29:40.705043  522827 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1217 20:29:40.705055  522827 command_runner.go:130] >       "repoTags":  [
	I1217 20:29:40.705060  522827 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1217 20:29:40.705063  522827 command_runner.go:130] >       ],
	I1217 20:29:40.705068  522827 command_runner.go:130] >       "repoDigests":  [
	I1217 20:29:40.705078  522827 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1217 20:29:40.705089  522827 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1217 20:29:40.705094  522827 command_runner.go:130] >       ],
	I1217 20:29:40.705097  522827 command_runner.go:130] >       "size":  "519884",
	I1217 20:29:40.705101  522827 command_runner.go:130] >       "uid":  {
	I1217 20:29:40.705108  522827 command_runner.go:130] >         "value":  "65535"
	I1217 20:29:40.705111  522827 command_runner.go:130] >       },
	I1217 20:29:40.705115  522827 command_runner.go:130] >       "username":  "",
	I1217 20:29:40.705119  522827 command_runner.go:130] >       "pinned":  true
	I1217 20:29:40.705128  522827 command_runner.go:130] >     }
	I1217 20:29:40.705133  522827 command_runner.go:130] >   ]
	I1217 20:29:40.705136  522827 command_runner.go:130] > }
	I1217 20:29:40.705310  522827 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 20:29:40.705323  522827 crio.go:433] Images already preloaded, skipping extraction
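
crio.go:514 reaches "all images are preloaded" by matching this JSON against the image set expected for v1.35.0-rc.1. The tag list can be checked by eye with jq (assumed installed), which is handy when a preload mismatch is suspected:

    sudo crictl images --output json | jq -r '.images[].repoTags[]' | sort
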
	I1217 20:29:40.705384  522827 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 20:29:40.728606  522827 command_runner.go:130] > {
	I1217 20:29:40.728624  522827 command_runner.go:130] >   "images":  [
	I1217 20:29:40.728629  522827 command_runner.go:130] >     {
	I1217 20:29:40.728638  522827 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1217 20:29:40.728643  522827 command_runner.go:130] >       "repoTags":  [
	I1217 20:29:40.728657  522827 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1217 20:29:40.728665  522827 command_runner.go:130] >       ],
	I1217 20:29:40.728669  522827 command_runner.go:130] >       "repoDigests":  [
	I1217 20:29:40.728678  522827 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1217 20:29:40.728686  522827 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1217 20:29:40.728689  522827 command_runner.go:130] >       ],
	I1217 20:29:40.728694  522827 command_runner.go:130] >       "size":  "111333938",
	I1217 20:29:40.728698  522827 command_runner.go:130] >       "username":  "",
	I1217 20:29:40.728705  522827 command_runner.go:130] >       "pinned":  false
	I1217 20:29:40.728708  522827 command_runner.go:130] >     },
	I1217 20:29:40.728711  522827 command_runner.go:130] >     {
	I1217 20:29:40.728718  522827 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1217 20:29:40.728726  522827 command_runner.go:130] >       "repoTags":  [
	I1217 20:29:40.728731  522827 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1217 20:29:40.728735  522827 command_runner.go:130] >       ],
	I1217 20:29:40.728739  522827 command_runner.go:130] >       "repoDigests":  [
	I1217 20:29:40.728747  522827 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1217 20:29:40.728756  522827 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1217 20:29:40.728759  522827 command_runner.go:130] >       ],
	I1217 20:29:40.728763  522827 command_runner.go:130] >       "size":  "29037500",
	I1217 20:29:40.728767  522827 command_runner.go:130] >       "username":  "",
	I1217 20:29:40.728774  522827 command_runner.go:130] >       "pinned":  false
	I1217 20:29:40.728778  522827 command_runner.go:130] >     },
	I1217 20:29:40.728781  522827 command_runner.go:130] >     {
	I1217 20:29:40.728789  522827 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1217 20:29:40.728793  522827 command_runner.go:130] >       "repoTags":  [
	I1217 20:29:40.728798  522827 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1217 20:29:40.728801  522827 command_runner.go:130] >       ],
	I1217 20:29:40.728805  522827 command_runner.go:130] >       "repoDigests":  [
	I1217 20:29:40.728813  522827 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1217 20:29:40.728821  522827 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1217 20:29:40.728824  522827 command_runner.go:130] >       ],
	I1217 20:29:40.728829  522827 command_runner.go:130] >       "size":  "74491780",
	I1217 20:29:40.728833  522827 command_runner.go:130] >       "username":  "nonroot",
	I1217 20:29:40.728840  522827 command_runner.go:130] >       "pinned":  false
	I1217 20:29:40.728843  522827 command_runner.go:130] >     },
	I1217 20:29:40.728846  522827 command_runner.go:130] >     {
	I1217 20:29:40.728853  522827 command_runner.go:130] >       "id":  "271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57",
	I1217 20:29:40.728857  522827 command_runner.go:130] >       "repoTags":  [
	I1217 20:29:40.728862  522827 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.6-0"
	I1217 20:29:40.728866  522827 command_runner.go:130] >       ],
	I1217 20:29:40.728870  522827 command_runner.go:130] >       "repoDigests":  [
	I1217 20:29:40.728877  522827 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890",
	I1217 20:29:40.728887  522827 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:aa0d8bc8f6a6c3655b8efe0a10c5bf052f5574ebe13f904c5b0c9002ce4b2561"
	I1217 20:29:40.728890  522827 command_runner.go:130] >       ],
	I1217 20:29:40.728894  522827 command_runner.go:130] >       "size":  "60850387",
	I1217 20:29:40.728898  522827 command_runner.go:130] >       "uid":  {
	I1217 20:29:40.728902  522827 command_runner.go:130] >         "value":  "0"
	I1217 20:29:40.728904  522827 command_runner.go:130] >       },
	I1217 20:29:40.728913  522827 command_runner.go:130] >       "username":  "",
	I1217 20:29:40.728917  522827 command_runner.go:130] >       "pinned":  false
	I1217 20:29:40.728920  522827 command_runner.go:130] >     },
	I1217 20:29:40.728924  522827 command_runner.go:130] >     {
	I1217 20:29:40.728930  522827 command_runner.go:130] >       "id":  "3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54",
	I1217 20:29:40.728934  522827 command_runner.go:130] >       "repoTags":  [
	I1217 20:29:40.728939  522827 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-rc.1"
	I1217 20:29:40.728943  522827 command_runner.go:130] >       ],
	I1217 20:29:40.728946  522827 command_runner.go:130] >       "repoDigests":  [
	I1217 20:29:40.728954  522827 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:58367b5c0428495c0c12411fa7a018f5d40fe57307b85d8935b1ed35706ff7ee",
	I1217 20:29:40.728962  522827 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:e6ee3594f9ff061c53d6721bc04b810ec4227e28da3bd98e59206d552d45cde8"
	I1217 20:29:40.728965  522827 command_runner.go:130] >       ],
	I1217 20:29:40.728969  522827 command_runner.go:130] >       "size":  "85015535",
	I1217 20:29:40.728972  522827 command_runner.go:130] >       "uid":  {
	I1217 20:29:40.728976  522827 command_runner.go:130] >         "value":  "0"
	I1217 20:29:40.728979  522827 command_runner.go:130] >       },
	I1217 20:29:40.728983  522827 command_runner.go:130] >       "username":  "",
	I1217 20:29:40.728986  522827 command_runner.go:130] >       "pinned":  false
	I1217 20:29:40.728996  522827 command_runner.go:130] >     },
	I1217 20:29:40.728999  522827 command_runner.go:130] >     {
	I1217 20:29:40.729006  522827 command_runner.go:130] >       "id":  "a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a",
	I1217 20:29:40.729009  522827 command_runner.go:130] >       "repoTags":  [
	I1217 20:29:40.729015  522827 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1"
	I1217 20:29:40.729018  522827 command_runner.go:130] >       ],
	I1217 20:29:40.729022  522827 command_runner.go:130] >       "repoDigests":  [
	I1217 20:29:40.729031  522827 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:42360249c0c729ed0542bc8e4a6cd9ba4df358a4e5a140f6c24d5f966ee5121f",
	I1217 20:29:40.729039  522827 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:57ab0f75f58d99f4be7bff7bdda015fcbf1b7c20e58ba2722c8c39f751dc8c98"
	I1217 20:29:40.729042  522827 command_runner.go:130] >       ],
	I1217 20:29:40.729046  522827 command_runner.go:130] >       "size":  "72170325",
	I1217 20:29:40.729049  522827 command_runner.go:130] >       "uid":  {
	I1217 20:29:40.729053  522827 command_runner.go:130] >         "value":  "0"
	I1217 20:29:40.729056  522827 command_runner.go:130] >       },
	I1217 20:29:40.729060  522827 command_runner.go:130] >       "username":  "",
	I1217 20:29:40.729064  522827 command_runner.go:130] >       "pinned":  false
	I1217 20:29:40.729067  522827 command_runner.go:130] >     },
	I1217 20:29:40.729070  522827 command_runner.go:130] >     {
	I1217 20:29:40.729076  522827 command_runner.go:130] >       "id":  "7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e",
	I1217 20:29:40.729081  522827 command_runner.go:130] >       "repoTags":  [
	I1217 20:29:40.729086  522827 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-rc.1"
	I1217 20:29:40.729089  522827 command_runner.go:130] >       ],
	I1217 20:29:40.729093  522827 command_runner.go:130] >       "repoDigests":  [
	I1217 20:29:40.729100  522827 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:709cbcd809826ad98b553d8e283a04db70fa653526d1c2a5e1b50000701b2b6f",
	I1217 20:29:40.729108  522827 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bdd1fa8b53558a2e1967379a36b085c93faf15581e5fa9f212baf679d89c5bb5"
	I1217 20:29:40.729111  522827 command_runner.go:130] >       ],
	I1217 20:29:40.729115  522827 command_runner.go:130] >       "size":  "74107287",
	I1217 20:29:40.729119  522827 command_runner.go:130] >       "username":  "",
	I1217 20:29:40.729123  522827 command_runner.go:130] >       "pinned":  false
	I1217 20:29:40.729125  522827 command_runner.go:130] >     },
	I1217 20:29:40.729128  522827 command_runner.go:130] >     {
	I1217 20:29:40.729135  522827 command_runner.go:130] >       "id":  "abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde",
	I1217 20:29:40.729138  522827 command_runner.go:130] >       "repoTags":  [
	I1217 20:29:40.729147  522827 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-rc.1"
	I1217 20:29:40.729150  522827 command_runner.go:130] >       ],
	I1217 20:29:40.729154  522827 command_runner.go:130] >       "repoDigests":  [
	I1217 20:29:40.729163  522827 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8155e3db27c7081abfc8eb5da70820cfeaf0bba7449e45360e8220e670f417d3",
	I1217 20:29:40.729180  522827 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:9ac9664e74153a60bf2c27af77561abc33d85a716a48893c7e50ad356adc4ea0"
	I1217 20:29:40.729183  522827 command_runner.go:130] >       ],
	I1217 20:29:40.729187  522827 command_runner.go:130] >       "size":  "49822549",
	I1217 20:29:40.729191  522827 command_runner.go:130] >       "uid":  {
	I1217 20:29:40.729195  522827 command_runner.go:130] >         "value":  "0"
	I1217 20:29:40.729198  522827 command_runner.go:130] >       },
	I1217 20:29:40.729202  522827 command_runner.go:130] >       "username":  "",
	I1217 20:29:40.729205  522827 command_runner.go:130] >       "pinned":  false
	I1217 20:29:40.729208  522827 command_runner.go:130] >     },
	I1217 20:29:40.729212  522827 command_runner.go:130] >     {
	I1217 20:29:40.729218  522827 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1217 20:29:40.729221  522827 command_runner.go:130] >       "repoTags":  [
	I1217 20:29:40.729225  522827 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1217 20:29:40.729228  522827 command_runner.go:130] >       ],
	I1217 20:29:40.729232  522827 command_runner.go:130] >       "repoDigests":  [
	I1217 20:29:40.729239  522827 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1217 20:29:40.729246  522827 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1217 20:29:40.729249  522827 command_runner.go:130] >       ],
	I1217 20:29:40.729253  522827 command_runner.go:130] >       "size":  "519884",
	I1217 20:29:40.729256  522827 command_runner.go:130] >       "uid":  {
	I1217 20:29:40.729260  522827 command_runner.go:130] >         "value":  "65535"
	I1217 20:29:40.729263  522827 command_runner.go:130] >       },
	I1217 20:29:40.729267  522827 command_runner.go:130] >       "username":  "",
	I1217 20:29:40.729271  522827 command_runner.go:130] >       "pinned":  true
	I1217 20:29:40.729274  522827 command_runner.go:130] >     }
	I1217 20:29:40.729276  522827 command_runner.go:130] >   ]
	I1217 20:29:40.729279  522827 command_runner.go:130] > }
	I1217 20:29:40.730532  522827 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 20:29:40.730563  522827 cache_images.go:86] Images are preloaded, skipping loading
	I1217 20:29:40.730572  522827 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 crio true true} ...
	I1217 20:29:40.730679  522827 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-655452 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-655452 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
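
kubeadm.go:947 renders the override above as a systemd drop-in: the empty ExecStart= first clears the unit's stock command, then the second line substitutes minikube's kubelet invocation. By kubeadm convention the drop-in is named 10-kubeadm.conf under kubelet.service.d; the exact path minikube writes is not shown in this log, so the one below is the usual guess:

    sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
    [Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-655452 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2

    [Install]
    EOF
    sudo systemctl daemon-reload && sudo systemctl restart kubelet
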
	I1217 20:29:40.730767  522827 ssh_runner.go:195] Run: crio config
	I1217 20:29:40.759067  522827 command_runner.go:130] ! time="2025-12-17T20:29:40.758680307Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1217 20:29:40.759091  522827 command_runner.go:130] ! time="2025-12-17T20:29:40.758877363Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1217 20:29:40.759355  522827 command_runner.go:130] ! time="2025-12-17T20:29:40.759160664Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1217 20:29:40.759513  522827 command_runner.go:130] ! time="2025-12-17T20:29:40.75929148Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1217 20:29:40.759764  522827 command_runner.go:130] ! time="2025-12-17T20:29:40.759610703Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:29:40.760178  522827 command_runner.go:130] ! time="2025-12-17T20:29:40.759978034Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1217 20:29:40.781892  522827 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1217 20:29:40.789853  522827 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1217 20:29:40.789886  522827 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1217 20:29:40.789894  522827 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1217 20:29:40.789897  522827 command_runner.go:130] > #
	I1217 20:29:40.789905  522827 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1217 20:29:40.789911  522827 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1217 20:29:40.789918  522827 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1217 20:29:40.789927  522827 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1217 20:29:40.789931  522827 command_runner.go:130] > # reload'.
	I1217 20:29:40.789938  522827 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1217 20:29:40.789949  522827 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1217 20:29:40.789959  522827 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1217 20:29:40.789965  522827 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1217 20:29:40.789972  522827 command_runner.go:130] > [crio]
	I1217 20:29:40.789978  522827 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1217 20:29:40.789983  522827 command_runner.go:130] > # containers images, in this directory.
	I1217 20:29:40.789993  522827 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1217 20:29:40.790003  522827 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1217 20:29:40.790008  522827 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1217 20:29:40.790017  522827 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1217 20:29:40.790024  522827 command_runner.go:130] > # imagestore = ""
	I1217 20:29:40.790038  522827 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1217 20:29:40.790048  522827 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1217 20:29:40.790053  522827 command_runner.go:130] > # storage_driver = "overlay"
	I1217 20:29:40.790058  522827 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1217 20:29:40.790065  522827 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1217 20:29:40.790069  522827 command_runner.go:130] > # storage_option = [
	I1217 20:29:40.790073  522827 command_runner.go:130] > # ]
	I1217 20:29:40.790079  522827 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1217 20:29:40.790092  522827 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1217 20:29:40.790100  522827 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1217 20:29:40.790106  522827 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1217 20:29:40.790112  522827 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1217 20:29:40.790119  522827 command_runner.go:130] > # always happen on a node reboot
	I1217 20:29:40.790124  522827 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1217 20:29:40.790139  522827 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1217 20:29:40.790152  522827 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1217 20:29:40.790158  522827 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1217 20:29:40.790162  522827 command_runner.go:130] > # version_file_persist = ""
	I1217 20:29:40.790170  522827 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1217 20:29:40.790180  522827 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1217 20:29:40.790184  522827 command_runner.go:130] > # internal_wipe = true
	I1217 20:29:40.790193  522827 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1217 20:29:40.790202  522827 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1217 20:29:40.790206  522827 command_runner.go:130] > # internal_repair = true
	I1217 20:29:40.790211  522827 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1217 20:29:40.790219  522827 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1217 20:29:40.790226  522827 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1217 20:29:40.790232  522827 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1217 20:29:40.790241  522827 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1217 20:29:40.790251  522827 command_runner.go:130] > [crio.api]
	I1217 20:29:40.790257  522827 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1217 20:29:40.790262  522827 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1217 20:29:40.790271  522827 command_runner.go:130] > # IP address on which the stream server will listen.
	I1217 20:29:40.790278  522827 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1217 20:29:40.790285  522827 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1217 20:29:40.790290  522827 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1217 20:29:40.790297  522827 command_runner.go:130] > # stream_port = "0"
	I1217 20:29:40.790302  522827 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1217 20:29:40.790307  522827 command_runner.go:130] > # stream_enable_tls = false
	I1217 20:29:40.790313  522827 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1217 20:29:40.790320  522827 command_runner.go:130] > # stream_idle_timeout = ""
	I1217 20:29:40.790330  522827 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1217 20:29:40.790339  522827 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1217 20:29:40.790343  522827 command_runner.go:130] > # stream_tls_cert = ""
	I1217 20:29:40.790349  522827 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1217 20:29:40.790357  522827 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1217 20:29:40.790361  522827 command_runner.go:130] > # stream_tls_key = ""
	I1217 20:29:40.790367  522827 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1217 20:29:40.790377  522827 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1217 20:29:40.790382  522827 command_runner.go:130] > # automatically pick up the changes.
	I1217 20:29:40.790385  522827 command_runner.go:130] > # stream_tls_ca = ""
	I1217 20:29:40.790402  522827 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1217 20:29:40.790415  522827 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1217 20:29:40.790423  522827 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1217 20:29:40.790428  522827 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
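As a sketch of the stream-server options just listed, assuming certificates already exist at the (hypothetical) paths shown, TLS could be enabled like this:

	[crio.api]
	stream_address = "127.0.0.1"
	stream_port = "10010"                      # fixed port instead of a random one
	stream_enable_tls = true
	stream_tls_cert = "/etc/crio/stream.crt"   # reloaded automatically on change
	stream_tls_key = "/etc/crio/stream.key"
	stream_tls_ca = "/etc/crio/stream-ca.crt"  # used to verify client connections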
	I1217 20:29:40.790437  522827 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1217 20:29:40.790443  522827 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1217 20:29:40.790447  522827 command_runner.go:130] > [crio.runtime]
	I1217 20:29:40.790455  522827 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1217 20:29:40.790465  522827 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1217 20:29:40.790470  522827 command_runner.go:130] > # "nofile=1024:2048"
	I1217 20:29:40.790476  522827 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1217 20:29:40.790480  522827 command_runner.go:130] > # default_ulimits = [
	I1217 20:29:40.790486  522827 command_runner.go:130] > # ]
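Following the "<ulimit name>=<soft limit>:<hard limit>" format just described, a hypothetical override raising the open-file limit for all containers would be:

	[crio.runtime]
	default_ulimits = [
		"nofile=1024:2048",   # soft limit 1024, hard limit 2048
	]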
	I1217 20:29:40.790493  522827 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1217 20:29:40.790499  522827 command_runner.go:130] > # no_pivot = false
	I1217 20:29:40.790505  522827 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1217 20:29:40.790511  522827 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1217 20:29:40.790518  522827 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1217 20:29:40.790525  522827 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1217 20:29:40.790530  522827 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1217 20:29:40.790539  522827 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1217 20:29:40.790543  522827 command_runner.go:130] > # conmon = ""
	I1217 20:29:40.790547  522827 command_runner.go:130] > # Cgroup setting for conmon
	I1217 20:29:40.790558  522827 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1217 20:29:40.790563  522827 command_runner.go:130] > conmon_cgroup = "pod"
	I1217 20:29:40.790572  522827 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1217 20:29:40.790585  522827 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1217 20:29:40.790592  522827 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1217 20:29:40.790603  522827 command_runner.go:130] > # conmon_env = [
	I1217 20:29:40.790606  522827 command_runner.go:130] > # ]
	I1217 20:29:40.790611  522827 command_runner.go:130] > # Additional environment variables to set for all the
	I1217 20:29:40.790621  522827 command_runner.go:130] > # containers. These are overridden if set in the
	I1217 20:29:40.790627  522827 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1217 20:29:40.790631  522827 command_runner.go:130] > # default_env = [
	I1217 20:29:40.790634  522827 command_runner.go:130] > # ]
	I1217 20:29:40.790639  522827 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1217 20:29:40.790647  522827 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I1217 20:29:40.790653  522827 command_runner.go:130] > # selinux = false
	I1217 20:29:40.790660  522827 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1217 20:29:40.790675  522827 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1217 20:29:40.790682  522827 command_runner.go:130] > # This option supports live configuration reload.
	I1217 20:29:40.790691  522827 command_runner.go:130] > # seccomp_profile = ""
	I1217 20:29:40.790698  522827 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1217 20:29:40.790703  522827 command_runner.go:130] > # This option supports live configuration reload.
	I1217 20:29:40.790707  522827 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1217 20:29:40.790717  522827 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1217 20:29:40.790723  522827 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1217 20:29:40.790730  522827 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1217 20:29:40.790738  522827 command_runner.go:130] > # the profile is set to "unconfined", then this is equivalent to disabling AppArmor.
	I1217 20:29:40.790744  522827 command_runner.go:130] > # This option supports live configuration reload.
	I1217 20:29:40.790751  522827 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1217 20:29:40.790757  522827 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1217 20:29:40.790761  522827 command_runner.go:130] > # the cgroup blockio controller.
	I1217 20:29:40.790765  522827 command_runner.go:130] > # blockio_config_file = ""
	I1217 20:29:40.790774  522827 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1217 20:29:40.790780  522827 command_runner.go:130] > # blockio parameters.
	I1217 20:29:40.790790  522827 command_runner.go:130] > # blockio_reload = false
	I1217 20:29:40.790796  522827 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1217 20:29:40.790800  522827 command_runner.go:130] > # irqbalance daemon.
	I1217 20:29:40.790805  522827 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1217 20:29:40.790814  522827 command_runner.go:130] > # irqbalance_config_restore_file allows setting a CPU mask that CRI-O should
	I1217 20:29:40.790828  522827 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1217 20:29:40.790836  522827 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1217 20:29:40.790845  522827 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1217 20:29:40.790852  522827 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1217 20:29:40.790859  522827 command_runner.go:130] > # This option supports live configuration reload.
	I1217 20:29:40.790863  522827 command_runner.go:130] > # rdt_config_file = ""
	I1217 20:29:40.790869  522827 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1217 20:29:40.790873  522827 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1217 20:29:40.790881  522827 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1217 20:29:40.790885  522827 command_runner.go:130] > # separate_pull_cgroup = ""
	I1217 20:29:40.790892  522827 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1217 20:29:40.790900  522827 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1217 20:29:40.790904  522827 command_runner.go:130] > # will be added.
	I1217 20:29:40.790908  522827 command_runner.go:130] > # default_capabilities = [
	I1217 20:29:40.790920  522827 command_runner.go:130] > # 	"CHOWN",
	I1217 20:29:40.790924  522827 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1217 20:29:40.790927  522827 command_runner.go:130] > # 	"FSETID",
	I1217 20:29:40.790930  522827 command_runner.go:130] > # 	"FOWNER",
	I1217 20:29:40.790940  522827 command_runner.go:130] > # 	"SETGID",
	I1217 20:29:40.790944  522827 command_runner.go:130] > # 	"SETUID",
	I1217 20:29:40.790963  522827 command_runner.go:130] > # 	"SETPCAP",
	I1217 20:29:40.790971  522827 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1217 20:29:40.790975  522827 command_runner.go:130] > # 	"KILL",
	I1217 20:29:40.790977  522827 command_runner.go:130] > # ]
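As an illustrative sketch, a cluster that wants to drop the setuid/setgid capabilities from the default set shown above could pin a reduced list; any omitted capability must then be requested explicitly in the pod spec:

	[crio.runtime]
	default_capabilities = [
		"CHOWN",
		"DAC_OVERRIDE",
		"FOWNER",
		"NET_BIND_SERVICE",
		"KILL",
	]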
	I1217 20:29:40.790985  522827 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1217 20:29:40.790992  522827 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1217 20:29:40.790999  522827 command_runner.go:130] > # add_inheritable_capabilities = false
	I1217 20:29:40.791005  522827 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1217 20:29:40.791018  522827 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1217 20:29:40.791023  522827 command_runner.go:130] > default_sysctls = [
	I1217 20:29:40.791030  522827 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1217 20:29:40.791033  522827 command_runner.go:130] > ]
	I1217 20:29:40.791038  522827 command_runner.go:130] > # List of devices on the host that a
	I1217 20:29:40.791044  522827 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1217 20:29:40.791048  522827 command_runner.go:130] > # allowed_devices = [
	I1217 20:29:40.791055  522827 command_runner.go:130] > # 	"/dev/fuse",
	I1217 20:29:40.791059  522827 command_runner.go:130] > # 	"/dev/net/tun",
	I1217 20:29:40.791062  522827 command_runner.go:130] > # ]
	I1217 20:29:40.791067  522827 command_runner.go:130] > # List of additional devices, specified as
	I1217 20:29:40.791081  522827 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1217 20:29:40.791088  522827 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1217 20:29:40.791096  522827 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1217 20:29:40.791103  522827 command_runner.go:130] > # additional_devices = [
	I1217 20:29:40.791110  522827 command_runner.go:130] > # ]
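Using the "<device-on-host>:<device-on-container>:<permissions>" format above, a hypothetical configuration exposing /dev/fuse to every container would look like:

	[crio.runtime]
	allowed_devices = [
		"/dev/fuse",
	]
	additional_devices = [
		"/dev/fuse:/dev/fuse:rwm",   # host path, container path, read/write/mknod
	]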
	I1217 20:29:40.791115  522827 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1217 20:29:40.791119  522827 command_runner.go:130] > # cdi_spec_dirs = [
	I1217 20:29:40.791122  522827 command_runner.go:130] > # 	"/etc/cdi",
	I1217 20:29:40.791126  522827 command_runner.go:130] > # 	"/var/run/cdi",
	I1217 20:29:40.791130  522827 command_runner.go:130] > # ]
	I1217 20:29:40.791136  522827 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1217 20:29:40.791144  522827 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1217 20:29:40.791149  522827 command_runner.go:130] > # Defaults to false.
	I1217 20:29:40.791156  522827 command_runner.go:130] > # device_ownership_from_security_context = false
	I1217 20:29:40.791164  522827 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1217 20:29:40.791178  522827 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1217 20:29:40.791181  522827 command_runner.go:130] > # hooks_dir = [
	I1217 20:29:40.791186  522827 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1217 20:29:40.791189  522827 command_runner.go:130] > # ]
	I1217 20:29:40.791195  522827 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1217 20:29:40.791205  522827 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1217 20:29:40.791210  522827 command_runner.go:130] > # its default mounts from the following two files:
	I1217 20:29:40.791220  522827 command_runner.go:130] > #
	I1217 20:29:40.791229  522827 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1217 20:29:40.791240  522827 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1217 20:29:40.791248  522827 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1217 20:29:40.791251  522827 command_runner.go:130] > #
	I1217 20:29:40.791257  522827 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1217 20:29:40.791274  522827 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1217 20:29:40.791280  522827 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1217 20:29:40.791285  522827 command_runner.go:130] > #      only add mounts it finds in this file.
	I1217 20:29:40.791288  522827 command_runner.go:130] > #
	I1217 20:29:40.791292  522827 command_runner.go:130] > # default_mounts_file = ""
	I1217 20:29:40.791301  522827 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1217 20:29:40.791316  522827 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1217 20:29:40.791320  522827 command_runner.go:130] > # pids_limit = -1
	I1217 20:29:40.791326  522827 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1217 20:29:40.791335  522827 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1217 20:29:40.791343  522827 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1217 20:29:40.791354  522827 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1217 20:29:40.791357  522827 command_runner.go:130] > # log_size_max = -1
	I1217 20:29:40.791364  522827 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1217 20:29:40.791368  522827 command_runner.go:130] > # log_to_journald = false
	I1217 20:29:40.791374  522827 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1217 20:29:40.791383  522827 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1217 20:29:40.791391  522827 command_runner.go:130] > # Path to directory for container attach sockets.
	I1217 20:29:40.791396  522827 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1217 20:29:40.791401  522827 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1217 20:29:40.791405  522827 command_runner.go:130] > # bind_mount_prefix = ""
	I1217 20:29:40.791417  522827 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1217 20:29:40.791421  522827 command_runner.go:130] > # read_only = false
	I1217 20:29:40.791427  522827 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1217 20:29:40.791437  522827 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1217 20:29:40.791441  522827 command_runner.go:130] > # live configuration reload.
	I1217 20:29:40.791445  522827 command_runner.go:130] > # log_level = "info"
	I1217 20:29:40.791454  522827 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1217 20:29:40.791460  522827 command_runner.go:130] > # This option supports live configuration reload.
	I1217 20:29:40.791466  522827 command_runner.go:130] > # log_filter = ""
	I1217 20:29:40.791472  522827 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1217 20:29:40.791481  522827 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1217 20:29:40.791485  522827 command_runner.go:130] > # separated by comma.
	I1217 20:29:40.791493  522827 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1217 20:29:40.791497  522827 command_runner.go:130] > # uid_mappings = ""
	I1217 20:29:40.791506  522827 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1217 20:29:40.791518  522827 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1217 20:29:40.791523  522827 command_runner.go:130] > # separated by comma.
	I1217 20:29:40.791530  522827 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1217 20:29:40.791535  522827 command_runner.go:130] > # gid_mappings = ""
	I1217 20:29:40.791540  522827 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1217 20:29:40.791549  522827 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1217 20:29:40.791556  522827 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1217 20:29:40.791565  522827 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1217 20:29:40.791572  522827 command_runner.go:130] > # minimum_mappable_uid = -1
	I1217 20:29:40.791604  522827 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1217 20:29:40.791611  522827 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1217 20:29:40.791617  522827 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1217 20:29:40.791627  522827 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1217 20:29:40.791634  522827 command_runner.go:130] > # minimum_mappable_gid = -1
	I1217 20:29:40.791640  522827 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1217 20:29:40.791648  522827 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1217 20:29:40.791662  522827 command_runner.go:130] > # value is 30s; lower values are ignored by CRI-O.
	I1217 20:29:40.791666  522827 command_runner.go:130] > # ctr_stop_timeout = 30
	I1217 20:29:40.791672  522827 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1217 20:29:40.791680  522827 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1217 20:29:40.791685  522827 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1217 20:29:40.791690  522827 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1217 20:29:40.791694  522827 command_runner.go:130] > # drop_infra_ctr = true
	I1217 20:29:40.791700  522827 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1217 20:29:40.791712  522827 command_runner.go:130] > # You can use the Linux CPU list format to specify the desired CPUs.
	I1217 20:29:40.791723  522827 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1217 20:29:40.791727  522827 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1217 20:29:40.791734  522827 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I1217 20:29:40.791743  522827 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1217 20:29:40.791749  522827 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1217 20:29:40.791756  522827 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1217 20:29:40.791760  522827 command_runner.go:130] > # shared_cpuset = ""
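Both options take the Linux CPU list format; a hypothetical layout that reserves CPUs 0-1 for infra containers and allows CPUs 2-3 to be shared might be:

	[crio.runtime]
	infra_ctr_cpuset = "0-1"    # match kubelet reserved-cpus for better isolation
	shared_cpuset = "2-3"       # may be shared between guaranteed containers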
	I1217 20:29:40.791766  522827 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1217 20:29:40.791773  522827 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1217 20:29:40.791777  522827 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1217 20:29:40.791784  522827 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1217 20:29:40.791795  522827 command_runner.go:130] > # pinns_path = ""
	I1217 20:29:40.791801  522827 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1217 20:29:40.791807  522827 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1217 20:29:40.791814  522827 command_runner.go:130] > # enable_criu_support = true
	I1217 20:29:40.791819  522827 command_runner.go:130] > # Enable/disable the generation of the container and
	I1217 20:29:40.791826  522827 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1217 20:29:40.791833  522827 command_runner.go:130] > # enable_pod_events = false
	I1217 20:29:40.791839  522827 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1217 20:29:40.791845  522827 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1217 20:29:40.791849  522827 command_runner.go:130] > # default_runtime = "crun"
	I1217 20:29:40.791857  522827 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1217 20:29:40.791865  522827 command_runner.go:130] > # will cause container creation to fail (as opposed to the current behavior of creating them as directories).
	I1217 20:29:40.791874  522827 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1217 20:29:40.791887  522827 command_runner.go:130] > # creation as a file is not desired either.
	I1217 20:29:40.791896  522827 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1217 20:29:40.791903  522827 command_runner.go:130] > # the hostname is being managed dynamically.
	I1217 20:29:40.791910  522827 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1217 20:29:40.791914  522827 command_runner.go:130] > # ]
	I1217 20:29:40.791920  522827 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1217 20:29:40.791929  522827 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1217 20:29:40.791935  522827 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1217 20:29:40.791943  522827 command_runner.go:130] > # Each entry in the table should follow the format:
	I1217 20:29:40.791946  522827 command_runner.go:130] > #
	I1217 20:29:40.791951  522827 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1217 20:29:40.791958  522827 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1217 20:29:40.791964  522827 command_runner.go:130] > # runtime_type = "oci"
	I1217 20:29:40.791969  522827 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1217 20:29:40.791976  522827 command_runner.go:130] > # inherit_default_runtime = false
	I1217 20:29:40.791981  522827 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1217 20:29:40.791986  522827 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1217 20:29:40.791990  522827 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1217 20:29:40.791996  522827 command_runner.go:130] > # monitor_env = []
	I1217 20:29:40.792001  522827 command_runner.go:130] > # privileged_without_host_devices = false
	I1217 20:29:40.792008  522827 command_runner.go:130] > # allowed_annotations = []
	I1217 20:29:40.792014  522827 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1217 20:29:40.792017  522827 command_runner.go:130] > # no_sync_log = false
	I1217 20:29:40.792021  522827 command_runner.go:130] > # default_annotations = {}
	I1217 20:29:40.792028  522827 command_runner.go:130] > # stream_websockets = false
	I1217 20:29:40.792034  522827 command_runner.go:130] > # seccomp_profile = ""
	I1217 20:29:40.792066  522827 command_runner.go:130] > # Where:
	I1217 20:29:40.792076  522827 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1217 20:29:40.792083  522827 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1217 20:29:40.792090  522827 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1217 20:29:40.792098  522827 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1217 20:29:40.792102  522827 command_runner.go:130] > #   in $PATH.
	I1217 20:29:40.792108  522827 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1217 20:29:40.792113  522827 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1217 20:29:40.792122  522827 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1217 20:29:40.792128  522827 command_runner.go:130] > #   state.
	I1217 20:29:40.792134  522827 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1217 20:29:40.792143  522827 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1217 20:29:40.792149  522827 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1217 20:29:40.792155  522827 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1217 20:29:40.792163  522827 command_runner.go:130] > #   the values from the default runtime on load time.
	I1217 20:29:40.792174  522827 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1217 20:29:40.792183  522827 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1217 20:29:40.792190  522827 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1217 20:29:40.792199  522827 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1217 20:29:40.792207  522827 command_runner.go:130] > #   The currently recognized values are:
	I1217 20:29:40.792214  522827 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1217 20:29:40.792222  522827 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1217 20:29:40.792231  522827 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1217 20:29:40.792237  522827 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1217 20:29:40.792251  522827 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1217 20:29:40.792260  522827 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1217 20:29:40.792270  522827 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1217 20:29:40.792277  522827 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1217 20:29:40.792284  522827 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1217 20:29:40.792293  522827 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1217 20:29:40.792309  522827 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1217 20:29:40.792316  522827 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1217 20:29:40.792322  522827 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1217 20:29:40.792331  522827 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1217 20:29:40.792337  522827 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1217 20:29:40.792345  522827 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1217 20:29:40.792353  522827 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1217 20:29:40.792358  522827 command_runner.go:130] > #   deprecated option "conmon".
	I1217 20:29:40.792367  522827 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1217 20:29:40.792380  522827 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1217 20:29:40.792387  522827 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1217 20:29:40.792392  522827 command_runner.go:130] > #   should be moved to the container's cgroup
	I1217 20:29:40.792405  522827 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1217 20:29:40.792410  522827 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1217 20:29:40.792420  522827 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1217 20:29:40.792424  522827 command_runner.go:130] > #   conmon-rs by using:
	I1217 20:29:40.792432  522827 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1217 20:29:40.792441  522827 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1217 20:29:40.792454  522827 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1217 20:29:40.792465  522827 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1217 20:29:40.792471  522827 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1217 20:29:40.792485  522827 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1217 20:29:40.792497  522827 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1217 20:29:40.792506  522827 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1217 20:29:40.792515  522827 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1217 20:29:40.792524  522827 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1217 20:29:40.792529  522827 command_runner.go:130] > #   when a machine crash happens.
	I1217 20:29:40.792536  522827 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1217 20:29:40.792546  522827 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1217 20:29:40.792558  522827 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1217 20:29:40.792562  522827 command_runner.go:130] > #   seccomp profile for the runtime.
	I1217 20:29:40.792568  522827 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1217 20:29:40.792579  522827 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1217 20:29:40.792582  522827 command_runner.go:130] > #
	I1217 20:29:40.792587  522827 command_runner.go:130] > # Using the seccomp notifier feature:
	I1217 20:29:40.792590  522827 command_runner.go:130] > #
	I1217 20:29:40.792596  522827 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1217 20:29:40.792605  522827 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1217 20:29:40.792608  522827 command_runner.go:130] > #
	I1217 20:29:40.792615  522827 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1217 20:29:40.792630  522827 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1217 20:29:40.792633  522827 command_runner.go:130] > #
	I1217 20:29:40.792642  522827 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1217 20:29:40.792649  522827 command_runner.go:130] > # feature.
	I1217 20:29:40.792652  522827 command_runner.go:130] > #
	I1217 20:29:40.792658  522827 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I1217 20:29:40.792667  522827 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1217 20:29:40.792673  522827 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1217 20:29:40.792679  522827 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1217 20:29:40.792688  522827 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1217 20:29:40.792692  522827 command_runner.go:130] > #
	I1217 20:29:40.792702  522827 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1217 20:29:40.792711  522827 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1217 20:29:40.792715  522827 command_runner.go:130] > #
	I1217 20:29:40.792721  522827 command_runner.go:130] > # This also means that the Pod's "restartPolicy" has to be set to "Never",
	I1217 20:29:40.792727  522827 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1217 20:29:40.792732  522827 command_runner.go:130] > #
	I1217 20:29:40.792738  522827 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1217 20:29:40.792744  522827 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1217 20:29:40.792750  522827 command_runner.go:130] > # limitation.
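Putting the runtime-handler table format and the allowed_annotations mechanism above together, a hypothetical extra handler (the handler name, paths, and annotation list are illustrative only, not from this run) would be registered as:

	[crio.runtime.runtimes.debug-runtime]
	runtime_path = "/usr/local/bin/crun"
	runtime_type = "oci"
	runtime_root = "/run/debug-runtime"
	monitor_path = "/usr/libexec/crio/conmon"
	monitor_cgroup = "pod"
	allowed_annotations = [
		"io.kubernetes.cri-o.seccompNotifierAction",   # enables the notifier feature described above
	]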
	I1217 20:29:40.792754  522827 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1217 20:29:40.792758  522827 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1217 20:29:40.792761  522827 command_runner.go:130] > runtime_type = ""
	I1217 20:29:40.792765  522827 command_runner.go:130] > runtime_root = "/run/crun"
	I1217 20:29:40.792769  522827 command_runner.go:130] > inherit_default_runtime = false
	I1217 20:29:40.792774  522827 command_runner.go:130] > runtime_config_path = ""
	I1217 20:29:40.792781  522827 command_runner.go:130] > container_min_memory = ""
	I1217 20:29:40.792785  522827 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1217 20:29:40.792796  522827 command_runner.go:130] > monitor_cgroup = "pod"
	I1217 20:29:40.792801  522827 command_runner.go:130] > monitor_exec_cgroup = ""
	I1217 20:29:40.792804  522827 command_runner.go:130] > allowed_annotations = [
	I1217 20:29:40.792809  522827 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1217 20:29:40.792814  522827 command_runner.go:130] > ]
	I1217 20:29:40.792819  522827 command_runner.go:130] > privileged_without_host_devices = false
	I1217 20:29:40.792823  522827 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1217 20:29:40.792828  522827 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1217 20:29:40.792834  522827 command_runner.go:130] > runtime_type = ""
	I1217 20:29:40.792839  522827 command_runner.go:130] > runtime_root = "/run/runc"
	I1217 20:29:40.792842  522827 command_runner.go:130] > inherit_default_runtime = false
	I1217 20:29:40.792846  522827 command_runner.go:130] > runtime_config_path = ""
	I1217 20:29:40.792850  522827 command_runner.go:130] > container_min_memory = ""
	I1217 20:29:40.792856  522827 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1217 20:29:40.792860  522827 command_runner.go:130] > monitor_cgroup = "pod"
	I1217 20:29:40.792864  522827 command_runner.go:130] > monitor_exec_cgroup = ""
	I1217 20:29:40.792875  522827 command_runner.go:130] > privileged_without_host_devices = false
	I1217 20:29:40.792884  522827 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1217 20:29:40.792890  522827 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1217 20:29:40.792896  522827 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1217 20:29:40.792907  522827 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1217 20:29:40.792918  522827 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1217 20:29:40.792930  522827 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores, this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1217 20:29:40.792940  522827 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1217 20:29:40.792947  522827 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1217 20:29:40.792958  522827 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1217 20:29:40.792975  522827 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1217 20:29:40.792980  522827 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1217 20:29:40.792998  522827 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1217 20:29:40.793004  522827 command_runner.go:130] > # Example:
	I1217 20:29:40.793009  522827 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1217 20:29:40.793014  522827 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1217 20:29:40.793019  522827 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1217 20:29:40.793025  522827 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1217 20:29:40.793029  522827 command_runner.go:130] > # cpuset = "0-1"
	I1217 20:29:40.793033  522827 command_runner.go:130] > # cpushares = "5"
	I1217 20:29:40.793039  522827 command_runner.go:130] > # cpuquota = "1000"
	I1217 20:29:40.793043  522827 command_runner.go:130] > # cpuperiod = "100000"
	I1217 20:29:40.793050  522827 command_runner.go:130] > # cpulimit = "35"
	I1217 20:29:40.793059  522827 command_runner.go:130] > # Where:
	I1217 20:29:40.793066  522827 command_runner.go:130] > # The workload name is workload-type.
	I1217 20:29:40.793073  522827 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1217 20:29:40.793079  522827 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1217 20:29:40.793087  522827 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1217 20:29:40.793096  522827 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1217 20:29:40.793101  522827 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1217 20:29:40.793106  522827 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1217 20:29:40.793116  522827 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1217 20:29:40.793122  522827 command_runner.go:130] > # Default value is set to true
	I1217 20:29:40.793132  522827 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1217 20:29:40.793141  522827 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1217 20:29:40.793146  522827 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1217 20:29:40.793150  522827 command_runner.go:130] > # Default value is set to 'false'
	I1217 20:29:40.793155  522827 command_runner.go:130] > # disable_hostport_mapping = false
	I1217 20:29:40.793163  522827 command_runner.go:130] > # timezone sets the timezone for a container in CRI-O.
	I1217 20:29:40.793172  522827 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1217 20:29:40.793175  522827 command_runner.go:130] > # timezone = ""
	I1217 20:29:40.793185  522827 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1217 20:29:40.793188  522827 command_runner.go:130] > #
	I1217 20:29:40.793194  522827 command_runner.go:130] > # CRI-O reads its registry defaults from the system-wide
	I1217 20:29:40.793212  522827 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1217 20:29:40.793215  522827 command_runner.go:130] > [crio.image]
	I1217 20:29:40.793222  522827 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1217 20:29:40.793229  522827 command_runner.go:130] > # default_transport = "docker://"
	I1217 20:29:40.793236  522827 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1217 20:29:40.793243  522827 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1217 20:29:40.793249  522827 command_runner.go:130] > # global_auth_file = ""
	I1217 20:29:40.793255  522827 command_runner.go:130] > # The image used to instantiate infra containers.
	I1217 20:29:40.793260  522827 command_runner.go:130] > # This option supports live configuration reload.
	I1217 20:29:40.793264  522827 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1217 20:29:40.793271  522827 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1217 20:29:40.793277  522827 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1217 20:29:40.793283  522827 command_runner.go:130] > # This option supports live configuration reload.
	I1217 20:29:40.793289  522827 command_runner.go:130] > # pause_image_auth_file = ""
	I1217 20:29:40.793295  522827 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1217 20:29:40.793304  522827 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1217 20:29:40.793311  522827 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1217 20:29:40.793317  522827 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1217 20:29:40.793323  522827 command_runner.go:130] > # pause_command = "/pause"
	I1217 20:29:40.793329  522827 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1217 20:29:40.793335  522827 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1217 20:29:40.793342  522827 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1217 20:29:40.793351  522827 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1217 20:29:40.793357  522827 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1217 20:29:40.793372  522827 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1217 20:29:40.793376  522827 command_runner.go:130] > # pinned_images = [
	I1217 20:29:40.793379  522827 command_runner.go:130] > # ]
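Matching the exact/glob/keyword patterns described above, a hypothetical pin list (image names other than the pause image are illustrative) could be:

	[crio.image]
	pinned_images = [
		"registry.k8s.io/pause:3.10.1",   # exact: must match the entire name
		"quay.io/myorg/*",                # glob: trailing wildcard
		"*critical*",                     # keyword: wildcards on both ends
	]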
	I1217 20:29:40.793388  522827 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1217 20:29:40.793401  522827 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1217 20:29:40.793408  522827 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1217 20:29:40.793416  522827 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1217 20:29:40.793422  522827 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1217 20:29:40.793426  522827 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1217 20:29:40.793432  522827 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1217 20:29:40.793439  522827 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1217 20:29:40.793445  522827 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1217 20:29:40.793456  522827 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or
	I1217 20:29:40.793462  522827 command_runner.go:130] > # system-wide policy will be used as fallback. Must be an absolute path.
	I1217 20:29:40.793467  522827 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1217 20:29:40.793473  522827 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1217 20:29:40.793479  522827 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1217 20:29:40.793483  522827 command_runner.go:130] > # changing them here.
	I1217 20:29:40.793488  522827 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1217 20:29:40.793492  522827 command_runner.go:130] > # insecure_registries = [
	I1217 20:29:40.793495  522827 command_runner.go:130] > # ]
	I1217 20:29:40.793514  522827 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1217 20:29:40.793522  522827 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1217 20:29:40.793526  522827 command_runner.go:130] > # image_volumes = "mkdir"
	I1217 20:29:40.793532  522827 command_runner.go:130] > # Temporary directory to use for storing big files
	I1217 20:29:40.793538  522827 command_runner.go:130] > # big_files_temporary_dir = ""
	I1217 20:29:40.793544  522827 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1217 20:29:40.793554  522827 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1217 20:29:40.793558  522827 command_runner.go:130] > # auto_reload_registries = false
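The registry configuration itself lives in containers-registries.conf(5), not in crio.conf; a minimal sketch of a mirror entry in a registries.conf.d drop-in (host names are hypothetical) is:

	[[registry]]
	prefix = "docker.io"
	location = "registry-1.docker.io"

	[[registry.mirror]]
	location = "mirror.example.com:5000"
	insecure = true   # skip TLS verification for this mirror only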
	I1217 20:29:40.793564  522827 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1217 20:29:40.793572  522827 command_runner.go:130] > # gets canceled. This value will be also used for calculating the pull progress interval to pull_progress_timeout / 10.
	I1217 20:29:40.793584  522827 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1217 20:29:40.793589  522827 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1217 20:29:40.793594  522827 command_runner.go:130] > # The mode of short name resolution.
	I1217 20:29:40.793600  522827 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1217 20:29:40.793607  522827 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1217 20:29:40.793613  522827 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1217 20:29:40.793624  522827 command_runner.go:130] > # short_name_mode = "enforcing"
	I1217 20:29:40.793631  522827 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1217 20:29:40.793636  522827 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1217 20:29:40.793643  522827 command_runner.go:130] > # oci_artifact_mount_support = true
	I1217 20:29:40.793649  522827 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1217 20:29:40.793653  522827 command_runner.go:130] > # CNI plugins.
	I1217 20:29:40.793662  522827 command_runner.go:130] > [crio.network]
	I1217 20:29:40.793669  522827 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1217 20:29:40.793674  522827 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1217 20:29:40.793678  522827 command_runner.go:130] > # cni_default_network = ""
	I1217 20:29:40.793683  522827 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1217 20:29:40.793688  522827 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1217 20:29:40.793695  522827 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1217 20:29:40.793701  522827 command_runner.go:130] > # plugin_dirs = [
	I1217 20:29:40.793705  522827 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1217 20:29:40.793708  522827 command_runner.go:130] > # ]
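A hypothetical explicit network setup, rather than relying on the first config found in network_dir, might look like:

	[crio.network]
	cni_default_network = "bridge"   # must match a network name defined in network_dir
	network_dir = "/etc/cni/net.d/"
	plugin_dirs = [
		"/opt/cni/bin/",
	]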
	I1217 20:29:40.793712  522827 command_runner.go:130] > # List of included pod metrics.
	I1217 20:29:40.793716  522827 command_runner.go:130] > # included_pod_metrics = [
	I1217 20:29:40.793721  522827 command_runner.go:130] > # ]
	I1217 20:29:40.793727  522827 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I1217 20:29:40.793733  522827 command_runner.go:130] > [crio.metrics]
	I1217 20:29:40.793738  522827 command_runner.go:130] > # Globally enable or disable metrics support.
	I1217 20:29:40.793742  522827 command_runner.go:130] > # enable_metrics = false
	I1217 20:29:40.793749  522827 command_runner.go:130] > # Specify enabled metrics collectors.
	I1217 20:29:40.793754  522827 command_runner.go:130] > # By default, all metrics are enabled.
	I1217 20:29:40.793760  522827 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1217 20:29:40.793769  522827 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1217 20:29:40.793781  522827 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1217 20:29:40.793788  522827 command_runner.go:130] > # metrics_collectors = [
	I1217 20:29:40.793792  522827 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1217 20:29:40.793796  522827 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1217 20:29:40.793801  522827 command_runner.go:130] > # 	"containers_oom_total",
	I1217 20:29:40.793810  522827 command_runner.go:130] > # 	"processes_defunct",
	I1217 20:29:40.793814  522827 command_runner.go:130] > # 	"operations_total",
	I1217 20:29:40.793818  522827 command_runner.go:130] > # 	"operations_latency_seconds",
	I1217 20:29:40.793825  522827 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1217 20:29:40.793830  522827 command_runner.go:130] > # 	"operations_errors_total",
	I1217 20:29:40.793834  522827 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1217 20:29:40.793838  522827 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1217 20:29:40.793843  522827 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1217 20:29:40.793847  522827 command_runner.go:130] > # 	"image_pulls_success_total",
	I1217 20:29:40.793851  522827 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1217 20:29:40.793857  522827 command_runner.go:130] > # 	"containers_oom_count_total",
	I1217 20:29:40.793862  522827 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1217 20:29:40.793869  522827 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1217 20:29:40.793873  522827 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1217 20:29:40.793876  522827 command_runner.go:130] > # ]
	I1217 20:29:40.793882  522827 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1217 20:29:40.793888  522827 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1217 20:29:40.793894  522827 command_runner.go:130] > # The port on which the metrics server will listen.
	I1217 20:29:40.793898  522827 command_runner.go:130] > # metrics_port = 9090
	I1217 20:29:40.793905  522827 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1217 20:29:40.793909  522827 command_runner.go:130] > # metrics_socket = ""
	I1217 20:29:40.793920  522827 command_runner.go:130] > # The certificate for the secure metrics server.
	I1217 20:29:40.793926  522827 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1217 20:29:40.793932  522827 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1217 20:29:40.793939  522827 command_runner.go:130] > # certificate on any modification event.
	I1217 20:29:40.793942  522827 command_runner.go:130] > # metrics_cert = ""
	I1217 20:29:40.793947  522827 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1217 20:29:40.793959  522827 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1217 20:29:40.793967  522827 command_runner.go:130] > # metrics_key = ""
	I1217 20:29:40.793980  522827 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1217 20:29:40.793983  522827 command_runner.go:130] > [crio.tracing]
	I1217 20:29:40.793989  522827 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1217 20:29:40.793996  522827 command_runner.go:130] > # enable_tracing = false
	I1217 20:29:40.794002  522827 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1217 20:29:40.794006  522827 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1217 20:29:40.794015  522827 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1217 20:29:40.794020  522827 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1217 20:29:40.794024  522827 command_runner.go:130] > # CRI-O NRI configuration.
	I1217 20:29:40.794027  522827 command_runner.go:130] > [crio.nri]
	I1217 20:29:40.794031  522827 command_runner.go:130] > # Globally enable or disable NRI.
	I1217 20:29:40.794035  522827 command_runner.go:130] > # enable_nri = true
	I1217 20:29:40.794039  522827 command_runner.go:130] > # NRI socket to listen on.
	I1217 20:29:40.794045  522827 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1217 20:29:40.794050  522827 command_runner.go:130] > # NRI plugin directory to use.
	I1217 20:29:40.794061  522827 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1217 20:29:40.794066  522827 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1217 20:29:40.794073  522827 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1217 20:29:40.794082  522827 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1217 20:29:40.794150  522827 command_runner.go:130] > # nri_disable_connections = false
	I1217 20:29:40.794172  522827 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1217 20:29:40.794178  522827 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1217 20:29:40.794186  522827 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1217 20:29:40.794191  522827 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1217 20:29:40.794200  522827 command_runner.go:130] > # NRI default validator configuration.
	I1217 20:29:40.794211  522827 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1217 20:29:40.794218  522827 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1217 20:29:40.794225  522827 command_runner.go:130] > # can be restricted/rejected:
	I1217 20:29:40.794229  522827 command_runner.go:130] > # - OCI hook injection
	I1217 20:29:40.794235  522827 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1217 20:29:40.794240  522827 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1217 20:29:40.794245  522827 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1217 20:29:40.794252  522827 command_runner.go:130] > # - adjustment of linux namespaces
	I1217 20:29:40.794263  522827 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1217 20:29:40.794277  522827 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1217 20:29:40.794284  522827 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1217 20:29:40.794295  522827 command_runner.go:130] > #
	I1217 20:29:40.794299  522827 command_runner.go:130] > # [crio.nri.default_validator]
	I1217 20:29:40.794304  522827 command_runner.go:130] > # nri_enable_default_validator = false
	I1217 20:29:40.794312  522827 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1217 20:29:40.794318  522827 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1217 20:29:40.794326  522827 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1217 20:29:40.794338  522827 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1217 20:29:40.794343  522827 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1217 20:29:40.794347  522827 command_runner.go:130] > # nri_validator_required_plugins = [
	I1217 20:29:40.794352  522827 command_runner.go:130] > # ]
	I1217 20:29:40.794359  522827 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
	I1217 20:29:40.794368  522827 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1217 20:29:40.794373  522827 command_runner.go:130] > [crio.stats]
	I1217 20:29:40.794386  522827 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1217 20:29:40.794392  522827 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1217 20:29:40.794398  522827 command_runner.go:130] > # stats_collection_period = 0
	I1217 20:29:40.794405  522827 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1217 20:29:40.794411  522827 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1217 20:29:40.794417  522827 command_runner.go:130] > # collection_period = 0
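Editor's note: the block ending here is CRI-O's fully commented default configuration as minikube captured it. As a hedged sketch, not part of this test run and assuming a stock CRI-O install with crio on PATH, the same dump can be reproduced and individual tables (such as [crio.metrics] above) overridden via the drop-in directory:

	# Print the effective CRI-O configuration, comments included
	sudo crio config > /tmp/crio-effective.toml
	# Drop-in files under crio.conf.d override the main config; for example,
	# enabling the [crio.metrics] table shown in the dump above:
	sudo mkdir -p /etc/crio/crio.conf.d
	cat <<'EOF' | sudo tee /etc/crio/crio.conf.d/10-metrics.conf
	[crio.metrics]
	enable_metrics = true
	metrics_host = "127.0.0.1"
	metrics_port = 9090
	EOF
	sudo systemctl restart crio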
	I1217 20:29:40.794552  522827 cni.go:84] Creating CNI manager for ""
	I1217 20:29:40.794571  522827 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 20:29:40.794583  522827 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 20:29:40.794609  522827 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-655452 NodeName:functional-655452 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 20:29:40.794745  522827 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-655452"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
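Editor's note: the four YAML documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are what the following lines copy to /var/tmp/minikube/kubeadm.yaml.new. As a hedged aside, not something the test itself runs, a generated config like this can be sanity-checked before use:

	# Validate the generated config without touching the node (kubeadm >= 1.26)
	sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
	# Or exercise the full init flow without persisting anything
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run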
	I1217 20:29:40.794827  522827 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1217 20:29:40.802768  522827 command_runner.go:130] > kubeadm
	I1217 20:29:40.802789  522827 command_runner.go:130] > kubectl
	I1217 20:29:40.802794  522827 command_runner.go:130] > kubelet
	I1217 20:29:40.802809  522827 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 20:29:40.802895  522827 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 20:29:40.810641  522827 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1217 20:29:40.826893  522827 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1217 20:29:40.841576  522827 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
	I1217 20:29:40.856014  522827 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1217 20:29:40.859640  522827 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
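Editor's note: the grep above confirms the control-plane alias already resolves in /etc/hosts, so no write is needed. A hedged sketch of the check-then-append pattern this step implements (the append branch is an assumption about minikube's behavior when the entry is missing):

	if ! grep -q 'control-plane.minikube.internal' /etc/hosts; then
	  printf '%s\t%s\n' 192.168.49.2 control-plane.minikube.internal | sudo tee -a /etc/hosts
	fi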
	I1217 20:29:40.860204  522827 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:29:40.970449  522827 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 20:29:41.821239  522827 certs.go:69] Setting up /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452 for IP: 192.168.49.2
	I1217 20:29:41.821266  522827 certs.go:195] generating shared ca certs ...
	I1217 20:29:41.821284  522827 certs.go:227] acquiring lock for ca certs: {Name:mk2b1bd9fa0b029b02dd38a7b1b08717d4426eca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:29:41.821441  522827 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.key
	I1217 20:29:41.821492  522827 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.key
	I1217 20:29:41.821509  522827 certs.go:257] generating profile certs ...
	I1217 20:29:41.821619  522827 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/client.key
	I1217 20:29:41.821682  522827 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/apiserver.key.aa95dda5
	I1217 20:29:41.821733  522827 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/proxy-client.key
	I1217 20:29:41.821747  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1217 20:29:41.821765  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1217 20:29:41.821780  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1217 20:29:41.821791  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1217 20:29:41.821805  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1217 20:29:41.821817  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1217 20:29:41.821831  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1217 20:29:41.821846  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1217 20:29:41.821894  522827 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem (1338 bytes)
	W1217 20:29:41.821945  522827 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412_empty.pem, impossibly tiny 0 bytes
	I1217 20:29:41.821959  522827 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 20:29:41.821996  522827 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem (1082 bytes)
	I1217 20:29:41.822031  522827 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem (1123 bytes)
	I1217 20:29:41.822058  522827 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem (1675 bytes)
	I1217 20:29:41.822104  522827 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem (1708 bytes)
	I1217 20:29:41.822138  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:29:41.822159  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem -> /usr/share/ca-certificates/488412.pem
	I1217 20:29:41.822175  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem -> /usr/share/ca-certificates/4884122.pem
	I1217 20:29:41.822802  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 20:29:41.845035  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 20:29:41.868336  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 20:29:41.901049  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1217 20:29:41.918871  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 20:29:41.937168  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1217 20:29:41.954450  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 20:29:41.971684  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 20:29:41.988884  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 20:29:42.008645  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem --> /usr/share/ca-certificates/488412.pem (1338 bytes)
	I1217 20:29:42.029398  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem --> /usr/share/ca-certificates/4884122.pem (1708 bytes)
	I1217 20:29:42.047332  522827 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 20:29:42.061588  522827 ssh_runner.go:195] Run: openssl version
	I1217 20:29:42.068928  522827 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1217 20:29:42.069476  522827 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/488412.pem
	I1217 20:29:42.078814  522827 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/488412.pem /etc/ssl/certs/488412.pem
	I1217 20:29:42.088990  522827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/488412.pem
	I1217 20:29:42.093920  522827 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 17 20:21 /usr/share/ca-certificates/488412.pem
	I1217 20:29:42.093987  522827 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 20:21 /usr/share/ca-certificates/488412.pem
	I1217 20:29:42.094097  522827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/488412.pem
	I1217 20:29:42.137804  522827 command_runner.go:130] > 51391683
	I1217 20:29:42.138358  522827 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 20:29:42.147537  522827 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4884122.pem
	I1217 20:29:42.157061  522827 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4884122.pem /etc/ssl/certs/4884122.pem
	I1217 20:29:42.166751  522827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4884122.pem
	I1217 20:29:42.171759  522827 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 17 20:21 /usr/share/ca-certificates/4884122.pem
	I1217 20:29:42.171865  522827 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 20:21 /usr/share/ca-certificates/4884122.pem
	I1217 20:29:42.172010  522827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4884122.pem
	I1217 20:29:42.222515  522827 command_runner.go:130] > 3ec20f2e
	I1217 20:29:42.222600  522827 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 20:29:42.231935  522827 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:29:42.242232  522827 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 20:29:42.250913  522827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:29:42.255543  522827 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 17 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:29:42.255609  522827 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:29:42.255686  522827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:29:42.298361  522827 command_runner.go:130] > b5213941
	I1217 20:29:42.298457  522827 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
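Editor's note: the three cert installs above (488412.pem, 4884122.pem, minikubeCA.pem) each follow OpenSSL's subject-hash convention: place the PEM under /usr/share/ca-certificates, compute its subject hash, and symlink <hash>.0 into /etc/ssl/certs so OpenSSL can find it by hash. A condensed sketch of one iteration:

	pem=/usr/share/ca-certificates/minikubeCA.pem
	hash=$(openssl x509 -hash -noout -in "$pem")   # e.g. b5213941, as in the log
	sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"  # OpenSSL looks CAs up by <hash>.0
	sudo test -L "/etc/ssl/certs/${hash}.0"        # verify the symlink is in place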
	I1217 20:29:42.307141  522827 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 20:29:42.311232  522827 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 20:29:42.311338  522827 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1217 20:29:42.311364  522827 command_runner.go:130] > Device: 259,1	Inode: 1313050     Links: 1
	I1217 20:29:42.311390  522827 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1217 20:29:42.311425  522827 command_runner.go:130] > Access: 2025-12-17 20:25:34.088053460 +0000
	I1217 20:29:42.311446  522827 command_runner.go:130] > Modify: 2025-12-17 20:21:29.777917427 +0000
	I1217 20:29:42.311461  522827 command_runner.go:130] > Change: 2025-12-17 20:21:29.777917427 +0000
	I1217 20:29:42.311467  522827 command_runner.go:130] >  Birth: 2025-12-17 20:21:29.777917427 +0000
	I1217 20:29:42.311555  522827 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 20:29:42.352885  522827 command_runner.go:130] > Certificate will not expire
	I1217 20:29:42.353302  522827 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 20:29:42.407045  522827 command_runner.go:130] > Certificate will not expire
	I1217 20:29:42.407143  522827 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 20:29:42.455863  522827 command_runner.go:130] > Certificate will not expire
	I1217 20:29:42.456326  522827 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 20:29:42.505636  522827 command_runner.go:130] > Certificate will not expire
	I1217 20:29:42.506227  522827 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 20:29:42.548331  522827 command_runner.go:130] > Certificate will not expire
	I1217 20:29:42.548862  522827 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1217 20:29:42.590705  522827 command_runner.go:130] > Certificate will not expire
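Editor's note: each expiry probe above relies on openssl's -checkend flag, which exits 0 (printing "Certificate will not expire") if the certificate is still valid 86400 seconds, i.e. 24 hours, from now, and exits 1 otherwise. A sketch of the same check over several certs (cert names taken from the log):

	for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client; do
	  openssl x509 -noout -checkend 86400 \
	    -in "/var/lib/minikube/certs/${c}.crt" || echo "${c} expires within 24h"
	done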
	I1217 20:29:42.591277  522827 kubeadm.go:401] StartCluster: {Name:functional-655452 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-655452 Namespace:default APIServerHAVIP: APIServerName:minikubeCA A
PIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwa
rePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:29:42.591354  522827 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 20:29:42.591425  522827 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 20:29:42.618986  522827 cri.go:89] found id: ""
	I1217 20:29:42.619059  522827 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 20:29:42.626323  522827 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1217 20:29:42.626347  522827 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1217 20:29:42.626355  522827 command_runner.go:130] > /var/lib/minikube/etcd:
	I1217 20:29:42.627403  522827 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 20:29:42.627425  522827 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 20:29:42.627476  522827 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 20:29:42.635033  522827 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 20:29:42.635439  522827 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-655452" does not appear in /home/jenkins/minikube-integration/21808-485134/kubeconfig
	I1217 20:29:42.635552  522827 kubeconfig.go:62] /home/jenkins/minikube-integration/21808-485134/kubeconfig needs updating (will repair): [kubeconfig missing "functional-655452" cluster setting kubeconfig missing "functional-655452" context setting]
	I1217 20:29:42.635844  522827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-485134/kubeconfig: {Name:mkc7214551fa855d6d4575d627c3ecd54291b351 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:29:42.636278  522827 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21808-485134/kubeconfig
	I1217 20:29:42.636437  522827 kapi.go:59] client config for functional-655452: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/client.crt", KeyFile:"/home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/client.key", CAFile:"/home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb51f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 20:29:42.636955  522827 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1217 20:29:42.636974  522827 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1217 20:29:42.636979  522827 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1217 20:29:42.636984  522827 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1217 20:29:42.636988  522827 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1217 20:29:42.637054  522827 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1217 20:29:42.637345  522827 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 20:29:42.646583  522827 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1217 20:29:42.646685  522827 kubeadm.go:602] duration metric: took 19.253149ms to restartPrimaryControlPlane
	I1217 20:29:42.646744  522827 kubeadm.go:403] duration metric: took 55.459532ms to StartCluster
	I1217 20:29:42.646789  522827 settings.go:142] acquiring lock: {Name:mk91fac3a58d91b836cd0701183ef7cc1e571672 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:29:42.646894  522827 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21808-485134/kubeconfig
	I1217 20:29:42.647795  522827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-485134/kubeconfig: {Name:mkc7214551fa855d6d4575d627c3ecd54291b351 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:29:42.648137  522827 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 20:29:42.648371  522827 config.go:182] Loaded profile config "functional-655452": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 20:29:42.648423  522827 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 20:29:42.648485  522827 addons.go:70] Setting storage-provisioner=true in profile "functional-655452"
	I1217 20:29:42.648497  522827 addons.go:239] Setting addon storage-provisioner=true in "functional-655452"
	I1217 20:29:42.648521  522827 host.go:66] Checking if "functional-655452" exists ...
	I1217 20:29:42.648902  522827 addons.go:70] Setting default-storageclass=true in profile "functional-655452"
	I1217 20:29:42.648999  522827 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-655452"
	I1217 20:29:42.649042  522827 cli_runner.go:164] Run: docker container inspect functional-655452 --format={{.State.Status}}
	I1217 20:29:42.649424  522827 cli_runner.go:164] Run: docker container inspect functional-655452 --format={{.State.Status}}
	I1217 20:29:42.653921  522827 out.go:179] * Verifying Kubernetes components...
	I1217 20:29:42.656821  522827 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:29:42.689834  522827 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21808-485134/kubeconfig
	I1217 20:29:42.690004  522827 kapi.go:59] client config for functional-655452: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/client.crt", KeyFile:"/home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/client.key", CAFile:"/home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb51f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 20:29:42.690276  522827 addons.go:239] Setting addon default-storageclass=true in "functional-655452"
	I1217 20:29:42.690305  522827 host.go:66] Checking if "functional-655452" exists ...
	I1217 20:29:42.690860  522827 cli_runner.go:164] Run: docker container inspect functional-655452 --format={{.State.Status}}
	I1217 20:29:42.692598  522827 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 20:29:42.699772  522827 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:29:42.699803  522827 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 20:29:42.699871  522827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
	I1217 20:29:42.735975  522827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/functional-655452/id_rsa Username:docker}
	I1217 20:29:42.743517  522827 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 20:29:42.743543  522827 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 20:29:42.743664  522827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
	I1217 20:29:42.778325  522827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/functional-655452/id_rsa Username:docker}
	I1217 20:29:42.848025  522827 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 20:29:42.860324  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:29:42.899199  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 20:29:43.321927  522827 node_ready.go:35] waiting up to 6m0s for node "functional-655452" to be "Ready" ...
	I1217 20:29:43.322118  522827 type.go:168] "Request Body" body=""
	I1217 20:29:43.322203  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:43.322465  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:29:43.322528  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:43.322567  522827 retry.go:31] will retry after 172.422642ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:43.322648  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:29:43.322689  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:43.322715  522827 retry.go:31] will retry after 167.097093ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:43.322809  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:43.490380  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 20:29:43.496229  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:29:43.581353  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:29:43.581433  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:43.581460  522827 retry.go:31] will retry after 331.036154ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:43.581553  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:29:43.581605  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:43.581639  522827 retry.go:31] will retry after 400.38477ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
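Editor's note: the repeated failures here are client-side validation in kubectl, which tries to download the OpenAPI schema from the API server; the server is still restarting, so every attempt dies with "connection refused" and minikube's retry.go schedules another attempt after a randomized delay. A hedged shell equivalent of that loop (delays are illustrative, not minikube's actual backoff values):

	for delay in 0.2 0.4 0.8 1.6; do
	  sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	    /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force \
	    -f /etc/kubernetes/addons/storage-provisioner.yaml && break
	  sleep "$delay"
	done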
	I1217 20:29:43.822877  522827 type.go:168] "Request Body" body=""
	I1217 20:29:43.822949  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:43.823300  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:43.912722  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 20:29:43.970874  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:29:43.974629  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:43.974708  522827 retry.go:31] will retry after 462.319516ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:43.982922  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:29:44.044566  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:29:44.048683  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:44.048723  522827 retry.go:31] will retry after 443.115947ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:44.323122  522827 type.go:168] "Request Body" body=""
	I1217 20:29:44.323200  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:44.323555  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:44.437879  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 20:29:44.492501  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:29:44.499443  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:29:44.499482  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:44.499520  522827 retry.go:31] will retry after 1.265386144s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:44.551004  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:29:44.551045  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:44.551085  522827 retry.go:31] will retry after 774.139673ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:44.822655  522827 type.go:168] "Request Body" body=""
	I1217 20:29:44.822811  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:44.823195  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:45.323027  522827 type.go:168] "Request Body" body=""
	I1217 20:29:45.323135  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:45.323507  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:29:45.323621  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:29:45.325715  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:29:45.391952  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:29:45.395668  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:45.395750  522827 retry.go:31] will retry after 1.529541916s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:45.765134  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 20:29:45.822845  522827 type.go:168] "Request Body" body=""
	I1217 20:29:45.822973  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:45.823280  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:45.823537  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:29:45.827173  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:45.827206  522827 retry.go:31] will retry after 637.037829ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:46.322836  522827 type.go:168] "Request Body" body=""
	I1217 20:29:46.322927  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:46.323203  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:46.464492  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 20:29:46.525009  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:29:46.525062  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:46.525083  522827 retry.go:31] will retry after 1.110973738s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:46.822245  522827 type.go:168] "Request Body" body=""
	I1217 20:29:46.822323  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:46.822662  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:46.926099  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:29:46.987960  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:29:46.988006  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:46.988028  522827 retry.go:31] will retry after 1.385710629s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
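The ssh_runner.go:195 "Run:" lines indicate each kubectl invocation is executed inside the node over SSH rather than on the host. A minimal sketch of running one remote command with golang.org/x/crypto/ssh; the address, user, and key below are placeholders for illustration, and minikube's runner additionally captures stdout and stderr separately:

    package main

    import (
    	"fmt"

    	"golang.org/x/crypto/ssh"
    )

    // runRemote executes one command on the node over SSH and returns its
    // combined output -- the rough shape of minikube's ssh_runner.Run.
    func runRemote(addr, user, keyPEM, cmd string) (string, error) {
    	signer, err := ssh.ParsePrivateKey([]byte(keyPEM))
    	if err != nil {
    		return "", err
    	}
    	cfg := &ssh.ClientConfig{
    		User: user,
    		Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		// Acceptable for a throwaway test VM; real code should pin the host key.
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
    	}
    	client, err := ssh.Dial("tcp", addr, cfg)
    	if err != nil {
    		return "", err
    	}
    	defer client.Close()
    	session, err := client.NewSession()
    	if err != nil {
    		return "", err
    	}
    	defer session.Close()
    	out, err := session.CombinedOutput(cmd)
    	return string(out), err
    }

    func main() {
    	// Placeholder connection details, not values taken from this report.
    	out, err := runRemote("192.168.49.2:22", "docker", "<private key PEM>",
    		"sudo KUBECONFIG=/var/lib/minikube/kubeconfig kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml")
    	fmt.Println(out, err)
    }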
	I1217 20:29:47.322640  522827 type.go:168] "Request Body" body=""
	I1217 20:29:47.322715  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:47.323041  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:47.636709  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 20:29:47.697205  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:29:47.697243  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:47.697264  522827 retry.go:31] will retry after 4.090194732s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:47.822497  522827 type.go:168] "Request Body" body=""
	I1217 20:29:47.822589  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:47.822932  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:29:47.822989  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:29:48.322659  522827 type.go:168] "Request Body" body=""
	I1217 20:29:48.322736  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:48.323019  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:48.374352  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:29:48.431979  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:29:48.435409  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:48.435442  522827 retry.go:31] will retry after 3.099398493s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:48.823142  522827 type.go:168] "Request Body" body=""
	I1217 20:29:48.823220  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:48.823522  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:49.322226  522827 type.go:168] "Request Body" body=""
	I1217 20:29:49.322316  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:49.322645  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:49.822280  522827 type.go:168] "Request Body" body=""
	I1217 20:29:49.822356  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:49.822709  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
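The paired round_trippers.go:527/632 entries come from a debug wrapper around the HTTP transport that logs each request's verb, URL, and headers, then the response status and latency; the empty status="" fields mean no response ever arrived. A minimal sketch of such a logging http.RoundTripper, as an illustration of the idea rather than client-go's actual implementation:

    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    // loggingTransport wraps another RoundTripper and logs request/response
    // metadata, the same idea as client-go's round_trippers debug output.
    type loggingTransport struct {
    	next http.RoundTripper
    }

    func (t loggingTransport) RoundTrip(req *http.Request) (*http.Response, error) {
    	fmt.Printf("Request verb=%s url=%s accept=%q\n",
    		req.Method, req.URL, req.Header.Get("Accept"))
    	start := time.Now()
    	resp, err := t.next.RoundTrip(req)
    	ms := time.Since(start).Milliseconds()
    	if err != nil {
    		// Matches the status="" lines above: the dial failed outright.
    		fmt.Printf("Response status=%q milliseconds=%d err=%v\n", "", ms, err)
    		return nil, err
    	}
    	fmt.Printf("Response status=%q milliseconds=%d\n", resp.Status, ms)
    	return resp, nil
    }

    func main() {
    	client := &http.Client{Transport: loggingTransport{next: http.DefaultTransport}}
    	// Unreachable by design, to reproduce the connection-refused entries.
    	_, err := client.Get("https://192.168.49.2:8441/api/v1/nodes/functional-655452")
    	fmt.Println(err)
    }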
	I1217 20:29:50.322247  522827 type.go:168] "Request Body" body=""
	I1217 20:29:50.322328  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:50.322668  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:29:50.322721  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:29:50.822373  522827 type.go:168] "Request Body" body=""
	I1217 20:29:50.822449  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:50.822719  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:51.322273  522827 type.go:168] "Request Body" body=""
	I1217 20:29:51.322361  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:51.322682  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:51.535119  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:29:51.608419  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:29:51.608461  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:51.608504  522827 retry.go:31] will retry after 5.948755722s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:51.787984  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 20:29:51.822458  522827 type.go:168] "Request Body" body=""
	I1217 20:29:51.822538  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:51.822817  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:51.846041  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:29:51.846085  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:51.846105  522827 retry.go:31] will retry after 5.856724643s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:52.322893  522827 type.go:168] "Request Body" body=""
	I1217 20:29:52.322982  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:52.323271  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:29:52.323320  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:29:52.822254  522827 type.go:168] "Request Body" body=""
	I1217 20:29:52.822329  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:52.822653  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:53.322391  522827 type.go:168] "Request Body" body=""
	I1217 20:29:53.322479  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:53.322825  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:53.822201  522827 type.go:168] "Request Body" body=""
	I1217 20:29:53.822273  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:53.822595  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:54.322265  522827 type.go:168] "Request Body" body=""
	I1217 20:29:54.322351  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:54.322683  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
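The User-Agent "minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format" is itself informative: v0.0.0 and $Format are the unstamped defaults that appear when a binary's version variables were never overwritten at link time, which is typical of a dev or CI build. A minimal sketch of the usual ldflags stamping technique, using a hypothetical package variable for illustration:

    package main

    import "fmt"

    // version is intended to be overwritten at build time, e.g.:
    //   go build -ldflags "-X main.version=v1.37.0"
    // If the flag is omitted, the default below leaks into logs and
    // User-Agent strings, which is what "v0.0.0" signals in this report.
    var version = "v0.0.0"

    func main() {
    	fmt.Println("minikube version:", version)
    }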
	I1217 20:29:54.822243  522827 type.go:168] "Request Body" body=""
	I1217 20:29:54.822322  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:54.822646  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:29:54.822705  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:29:55.322383  522827 type.go:168] "Request Body" body=""
	I1217 20:29:55.322466  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:55.322739  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:55.822262  522827 type.go:168] "Request Body" body=""
	I1217 20:29:55.822349  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:55.822680  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:56.322404  522827 type.go:168] "Request Body" body=""
	I1217 20:29:56.322493  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:56.322874  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:56.822564  522827 type.go:168] "Request Body" body=""
	I1217 20:29:56.822678  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:56.823046  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:29:56.823109  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:29:57.322771  522827 type.go:168] "Request Body" body=""
	I1217 20:29:57.322846  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:57.323141  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:57.557506  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:29:57.638482  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:29:57.642516  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:57.642548  522827 retry.go:31] will retry after 4.405911356s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:57.703796  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 20:29:57.764881  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:29:57.764928  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:57.764950  522827 retry.go:31] will retry after 7.580168113s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:57.823128  522827 type.go:168] "Request Body" body=""
	I1217 20:29:57.823235  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:57.823556  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:58.322216  522827 type.go:168] "Request Body" body=""
	I1217 20:29:58.322291  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:58.322579  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:58.822288  522827 type.go:168] "Request Body" body=""
	I1217 20:29:58.822434  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:58.822838  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:59.322555  522827 type.go:168] "Request Body" body=""
	I1217 20:29:59.322632  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:59.322948  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:29:59.323004  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:29:59.822770  522827 type.go:168] "Request Body" body=""
	I1217 20:29:59.822844  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:59.823119  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:00.323032  522827 type.go:168] "Request Body" body=""
	I1217 20:30:00.323116  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:00.323489  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:00.822257  522827 type.go:168] "Request Body" body=""
	I1217 20:30:00.822349  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:00.822678  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:01.322375  522827 type.go:168] "Request Body" body=""
	I1217 20:30:01.322459  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:01.322808  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:01.822288  522827 type.go:168] "Request Body" body=""
	I1217 20:30:01.822382  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:01.822690  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:30:01.822741  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:30:02.049201  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:30:02.136097  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:30:02.136138  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:30:02.136156  522827 retry.go:31] will retry after 5.567678678s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:30:02.322750  522827 type.go:168] "Request Body" body=""
	I1217 20:30:02.322843  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:02.323173  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:02.822939  522827 type.go:168] "Request Body" body=""
	I1217 20:30:02.823008  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:02.823350  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:03.323175  522827 type.go:168] "Request Body" body=""
	I1217 20:30:03.323258  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:03.323612  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:03.822172  522827 type.go:168] "Request Body" body=""
	I1217 20:30:03.822257  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:03.822603  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:04.322314  522827 type.go:168] "Request Body" body=""
	I1217 20:30:04.322401  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:04.322723  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:30:04.322781  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:30:04.822229  522827 type.go:168] "Request Body" body=""
	I1217 20:30:04.822306  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:04.822675  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:05.322398  522827 type.go:168] "Request Body" body=""
	I1217 20:30:05.322478  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:05.322839  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:05.346115  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 20:30:05.408232  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:30:05.408289  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:30:05.408313  522827 retry.go:31] will retry after 10.078206747s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:30:05.822854  522827 type.go:168] "Request Body" body=""
	I1217 20:30:05.822945  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:05.823317  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:06.323102  522827 type.go:168] "Request Body" body=""
	I1217 20:30:06.323172  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:06.323471  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:30:06.323519  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:30:06.822291  522827 type.go:168] "Request Body" body=""
	I1217 20:30:06.822371  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:06.822701  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:07.322790  522827 type.go:168] "Request Body" body=""
	I1217 20:30:07.322867  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:07.323162  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:07.703974  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:30:07.764647  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:30:07.764701  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:30:07.764721  522827 retry.go:31] will retry after 19.009086903s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:30:07.822843  522827 type.go:168] "Request Body" body=""
	I1217 20:30:07.822915  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:07.823267  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:08.323079  522827 type.go:168] "Request Body" body=""
	I1217 20:30:08.323151  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:08.323471  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:08.822188  522827 type.go:168] "Request Body" body=""
	I1217 20:30:08.822263  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:08.822521  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:30:08.822572  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:30:09.322241  522827 type.go:168] "Request Body" body=""
	I1217 20:30:09.322340  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:09.322671  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:09.822374  522827 type.go:168] "Request Body" body=""
	I1217 20:30:09.822457  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:09.822805  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:10.322483  522827 type.go:168] "Request Body" body=""
	I1217 20:30:10.322552  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:10.322843  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:10.822281  522827 type.go:168] "Request Body" body=""
	I1217 20:30:10.822358  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:10.822652  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:30:10.822700  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:30:11.322271  522827 type.go:168] "Request Body" body=""
	I1217 20:30:11.322352  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:11.322672  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:11.822207  522827 type.go:168] "Request Body" body=""
	I1217 20:30:11.822286  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:11.822549  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:12.322594  522827 type.go:168] "Request Body" body=""
	I1217 20:30:12.322674  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:12.322988  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:12.822976  522827 type.go:168] "Request Body" body=""
	I1217 20:30:12.823050  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:12.823410  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:30:12.823463  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:30:13.322144  522827 type.go:168] "Request Body" body=""
	I1217 20:30:13.322232  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:13.322521  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:13.822230  522827 type.go:168] "Request Body" body=""
	I1217 20:30:13.822307  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:13.822623  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:14.322243  522827 type.go:168] "Request Body" body=""
	I1217 20:30:14.322319  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:14.322666  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:14.822203  522827 type.go:168] "Request Body" body=""
	I1217 20:30:14.822311  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:14.822605  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:15.322246  522827 type.go:168] "Request Body" body=""
	I1217 20:30:15.322320  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:15.322647  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:30:15.322700  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:30:15.487149  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 20:30:15.557091  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:30:15.557136  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:30:15.557155  522827 retry.go:31] will retry after 12.964696684s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:30:15.822271  522827 type.go:168] "Request Body" body=""
	I1217 20:30:15.822351  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:15.822662  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the GET https://192.168.49.2:8441/api/v1/nodes/functional-655452 poll above repeated every ~500ms through 20:30:26.322, each attempt failing immediately with "connect: connection refused" ...]
	W1217 20:30:17.323170  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the node_ready.go warning above recurred roughly every 2.5s for the rest of this run (20:30:19.822, 20:30:21.822, 20:30:24.322, ...) ...]
	I1217 20:30:26.774084  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	[... poll at 20:30:26.822: same GET, same connection refused ...]
	W1217 20:30:26.823028  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:30:26.837910  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:30:26.841500  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:30:26.841530  522827 retry.go:31] will retry after 11.131595667s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
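
The retry.go lines schedule re-applies at randomized intervals (11.1s here, later 27.2s, 31.0s and 28.3s), so the backoff looks jittered rather than fixed. A sketch of that apply-and-retry shape (the 10-35s band, the attempt cap, and the plain kubectl invocation are assumptions; the real runner goes over SSH with sudo and an explicit KUBECONFIG, as the ssh_runner lines show):

// retryapply.go: apply a manifest, retrying after a randomized wait.
package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// applyManifest shells out the way the ssh_runner lines do, minus SSH,
// sudo and the KUBECONFIG override, which are omitted for brevity.
func applyManifest(path string) error {
	out, err := exec.Command("kubectl", "apply", "--force", "-f", path).CombinedOutput()
	if err != nil {
		return fmt.Errorf("apply %s: %v\n%s", path, err, out)
	}
	return nil
}

func main() {
	path := "/etc/kubernetes/addons/storage-provisioner.yaml"
	for attempt := 1; attempt <= 4; attempt++ {
		err := applyManifest(path)
		if err == nil {
			fmt.Println("applied", path)
			return
		}
		// Randomized wait in the 10-35s band observed in the log.
		wait := time.Duration(10+rand.Intn(25)) * time.Second
		fmt.Printf("attempt %d failed (%v); will retry after %s\n", attempt, err, wait)
		time.Sleep(wait)
	}
	fmt.Println("giving up and surfacing the error, as the out.go:285 warning later in this log does")
}
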
	[... polls at 20:30:27.322 through 20:30:28.322: same GET, same connection refused ...]
	I1217 20:30:28.523062  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 20:30:28.580613  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:30:28.584486  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:30:28.584522  522827 retry.go:31] will retry after 27.188888106s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... polls continued every ~500ms from 20:30:28.822 through 20:30:37.822, all refused; node_ready.go "will retry" warnings at 20:30:28.823, 20:30:31.322, 20:30:33.323 and 20:30:35.822 ...]
	I1217 20:30:37.974039  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:30:38.040817  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:30:38.040869  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:30:38.040889  522827 retry.go:31] will retry after 31.049103728s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... polls continued every ~500ms from 20:30:38.322 through 20:30:55.322, all refused; node_ready.go "will retry" warnings at 20:30:38.322, 20:30:40.322, 20:30:42.323, 20:30:44.822, 20:30:46.822, 20:30:48.823, 20:30:51.323 and 20:30:53.822 ...]
	I1217 20:30:55.774295  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	[... poll at 20:30:55.822: same GET, same connection refused ...]
	W1217 20:30:55.823237  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:30:55.835665  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:30:55.835703  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:30:55.835722  522827 retry.go:31] will retry after 28.301795669s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... polls continued every ~500ms from 20:30:56.322 through 20:31:08.822, all refused; node_ready.go "will retry" warnings at 20:30:57.823, 20:31:00.322, 20:31:02.822, 20:31:04.822 and 20:31:06.823 ...]
	I1217 20:31:09.091155  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:31:09.152330  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:31:09.155944  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:31:09.156044  522827 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
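The storage-provisioner apply fails for the same underlying reason as the node polls: kubectl performs client-side validation by fetching the OpenAPI schema from the apiserver (localhost:8441 here), which is refusing connections, so the apply never reaches the cluster. kubectl's own error text suggests --validate=false as a workaround, but minikube instead retries, per the addons.go "apply failed, will retry" line. A hypothetical sketch of such a retry wrapper follows; the command string is taken verbatim from the log, while the attempt count and backoff interval are assumptions, not values from minikube's addons.go.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// applyWithRetry mirrors the "apply failed, will retry" behavior logged
// above: run kubectl apply via sudo, and on failure back off and try again.
func applyWithRetry(manifest string, attempts int) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		// Command string copied from the ssh_runner line in the log.
		cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
			"/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl",
			"apply", "--force", "-f", manifest)
		out, err := cmd.CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("apply failed, will retry: %v\n%s", err, out)
		fmt.Println(lastErr)
		time.Sleep(15 * time.Second) // assumed backoff; not taken from the log
	}
	return lastErr
}

func main() {
	if err := applyWithRetry("/etc/kubernetes/addons/storage-provisioner.yaml", 3); err != nil {
		fmt.Println("giving up:", err)
	}
}

As the transcript goes on to show, the retries never succeed while port 8441 stays refused, and the addon enable step eventually finishes with an empty enabled=[] list.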
	I1217 20:31:09.322225  522827 type.go:168] "Request Body" body=""
	I1217 20:31:09.322322  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:09.322666  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:31:09.322722  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:31:09.822389  522827 type.go:168] "Request Body" body=""
	I1217 20:31:09.822485  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:09.822808  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:10.322485  522827 type.go:168] "Request Body" body=""
	I1217 20:31:10.322557  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:10.322813  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:10.822228  522827 type.go:168] "Request Body" body=""
	I1217 20:31:10.822305  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:10.822670  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:11.322230  522827 type.go:168] "Request Body" body=""
	I1217 20:31:11.322323  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:11.322659  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:11.822317  522827 type.go:168] "Request Body" body=""
	I1217 20:31:11.822395  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:11.822653  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:31:11.822709  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:31:12.322704  522827 type.go:168] "Request Body" body=""
	I1217 20:31:12.322778  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:12.323076  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:12.822968  522827 type.go:168] "Request Body" body=""
	I1217 20:31:12.823050  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:12.823387  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:13.323001  522827 type.go:168] "Request Body" body=""
	I1217 20:31:13.323088  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:13.323368  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:13.823235  522827 type.go:168] "Request Body" body=""
	I1217 20:31:13.823315  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:13.823670  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:31:13.823726  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:31:14.322222  522827 type.go:168] "Request Body" body=""
	I1217 20:31:14.322295  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:14.322647  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:14.822221  522827 type.go:168] "Request Body" body=""
	I1217 20:31:14.822300  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:14.822581  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:15.322323  522827 type.go:168] "Request Body" body=""
	I1217 20:31:15.322403  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:15.322715  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:15.822407  522827 type.go:168] "Request Body" body=""
	I1217 20:31:15.822512  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:15.822811  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:16.322304  522827 type.go:168] "Request Body" body=""
	I1217 20:31:16.322385  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:16.322637  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:31:16.322683  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:31:16.822297  522827 type.go:168] "Request Body" body=""
	I1217 20:31:16.822416  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:16.822748  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:17.322737  522827 type.go:168] "Request Body" body=""
	I1217 20:31:17.322810  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:17.323096  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:17.822837  522827 type.go:168] "Request Body" body=""
	I1217 20:31:17.822931  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:17.823257  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:18.323065  522827 type.go:168] "Request Body" body=""
	I1217 20:31:18.323140  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:18.323508  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:31:18.323570  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:31:18.822258  522827 type.go:168] "Request Body" body=""
	I1217 20:31:18.822342  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:18.822689  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:19.322395  522827 type.go:168] "Request Body" body=""
	I1217 20:31:19.322475  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:19.322822  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:19.822246  522827 type.go:168] "Request Body" body=""
	I1217 20:31:19.822344  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:19.822647  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:20.322271  522827 type.go:168] "Request Body" body=""
	I1217 20:31:20.322363  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:20.322714  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:20.822389  522827 type.go:168] "Request Body" body=""
	I1217 20:31:20.822466  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:20.822785  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:31:20.822834  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:31:21.322233  522827 type.go:168] "Request Body" body=""
	I1217 20:31:21.322331  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:21.322658  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:21.822347  522827 type.go:168] "Request Body" body=""
	I1217 20:31:21.822422  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:21.822747  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:22.322631  522827 type.go:168] "Request Body" body=""
	I1217 20:31:22.322703  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:22.322965  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:22.822936  522827 type.go:168] "Request Body" body=""
	I1217 20:31:22.823012  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:22.823323  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:31:22.823370  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:31:23.323099  522827 type.go:168] "Request Body" body=""
	I1217 20:31:23.323180  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:23.323479  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:23.822130  522827 type.go:168] "Request Body" body=""
	I1217 20:31:23.822204  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:23.822471  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:24.138134  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 20:31:24.201991  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:31:24.202036  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:31:24.202117  522827 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1217 20:31:24.205262  522827 out.go:179] * Enabled addons: 
	I1217 20:31:24.208903  522827 addons.go:530] duration metric: took 1m41.560475312s for enable addons: enabled=[]
	I1217 20:31:24.322240  522827 type.go:168] "Request Body" body=""
	I1217 20:31:24.322318  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:24.322668  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:24.822384  522827 type.go:168] "Request Body" body=""
	I1217 20:31:24.822478  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:24.822815  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:25.322366  522827 type.go:168] "Request Body" body=""
	I1217 20:31:25.322441  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:25.322753  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:31:25.322800  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:31:25.822458  522827 type.go:168] "Request Body" body=""
	I1217 20:31:25.822532  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:25.822902  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:26.322508  522827 type.go:168] "Request Body" body=""
	I1217 20:31:26.322584  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:26.322912  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:26.822194  522827 type.go:168] "Request Body" body=""
	I1217 20:31:26.822272  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:26.822592  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:27.322423  522827 type.go:168] "Request Body" body=""
	I1217 20:31:27.322530  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:27.322841  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:31:27.322894  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:31:27.822547  522827 type.go:168] "Request Body" body=""
	I1217 20:31:27.822621  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:27.822984  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:28.322302  522827 type.go:168] "Request Body" body=""
	I1217 20:31:28.322385  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:28.322685  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:28.822382  522827 type.go:168] "Request Body" body=""
	I1217 20:31:28.822464  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:28.822833  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:29.322567  522827 type.go:168] "Request Body" body=""
	I1217 20:31:29.322643  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:29.322987  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:31:29.323043  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:31:29.822734  522827 type.go:168] "Request Body" body=""
	I1217 20:31:29.822807  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:29.823076  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:30.322834  522827 type.go:168] "Request Body" body=""
	I1217 20:31:30.322906  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:30.323262  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:30.823096  522827 type.go:168] "Request Body" body=""
	I1217 20:31:30.823184  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:30.823505  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:31.322243  522827 type.go:168] "Request Body" body=""
	I1217 20:31:31.322319  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:31.322606  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:31.822221  522827 type.go:168] "Request Body" body=""
	I1217 20:31:31.822295  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:31.822614  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:31:31.822668  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:31:32.322585  522827 type.go:168] "Request Body" body=""
	I1217 20:31:32.322665  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:32.322989  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:32.822991  522827 type.go:168] "Request Body" body=""
	I1217 20:31:32.823063  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:32.823325  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:33.323053  522827 type.go:168] "Request Body" body=""
	I1217 20:31:33.323151  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:33.323496  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:33.822867  522827 type.go:168] "Request Body" body=""
	I1217 20:31:33.822946  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:33.823324  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:31:33.823391  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:31:34.323215  522827 type.go:168] "Request Body" body=""
	I1217 20:31:34.323300  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:34.323630  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:34.822311  522827 type.go:168] "Request Body" body=""
	I1217 20:31:34.822386  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:34.822717  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:35.322215  522827 type.go:168] "Request Body" body=""
	I1217 20:31:35.322293  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:35.322668  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:35.822201  522827 type.go:168] "Request Body" body=""
	I1217 20:31:35.822284  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:35.822539  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:36.322256  522827 type.go:168] "Request Body" body=""
	I1217 20:31:36.322344  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:36.322708  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:31:36.322778  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:31:36.822306  522827 type.go:168] "Request Body" body=""
	I1217 20:31:36.822387  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:36.822729  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:37.322707  522827 type.go:168] "Request Body" body=""
	I1217 20:31:37.322775  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:37.323029  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:37.822287  522827 type.go:168] "Request Body" body=""
	I1217 20:31:37.822373  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:37.823676  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1217 20:31:38.322400  522827 type.go:168] "Request Body" body=""
	I1217 20:31:38.322477  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:38.322802  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:31:38.322850  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:31:38.822462  522827 type.go:168] "Request Body" body=""
	I1217 20:31:38.822552  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:38.822813  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:39.322538  522827 type.go:168] "Request Body" body=""
	I1217 20:31:39.322613  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:39.322992  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:39.822813  522827 type.go:168] "Request Body" body=""
	I1217 20:31:39.822889  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:39.823220  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:40.322969  522827 type.go:168] "Request Body" body=""
	I1217 20:31:40.323049  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:40.323311  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:31:40.323365  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:31:40.823132  522827 type.go:168] "Request Body" body=""
	I1217 20:31:40.823213  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:40.823556  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:41.322295  522827 type.go:168] "Request Body" body=""
	I1217 20:31:41.322379  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:41.322741  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:41.822257  522827 type.go:168] "Request Body" body=""
	I1217 20:31:41.822325  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:41.822584  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:42.322236  522827 type.go:168] "Request Body" body=""
	I1217 20:31:42.322354  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:42.322714  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:42.822359  522827 type.go:168] "Request Body" body=""
	I1217 20:31:42.822447  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:42.822773  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:31:42.822824  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:31:43.322198  522827 type.go:168] "Request Body" body=""
	I1217 20:31:43.322269  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:43.322552  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:43.822223  522827 type.go:168] "Request Body" body=""
	I1217 20:31:43.822299  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:43.822646  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:44.322231  522827 type.go:168] "Request Body" body=""
	I1217 20:31:44.322310  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:44.322646  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:44.822277  522827 type.go:168] "Request Body" body=""
	I1217 20:31:44.822351  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:44.822615  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:45.322247  522827 type.go:168] "Request Body" body=""
	I1217 20:31:45.322349  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:45.322649  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:31:45.322699  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:31:45.822364  522827 type.go:168] "Request Body" body=""
	I1217 20:31:45.822446  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:45.822791  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:46.322336  522827 type.go:168] "Request Body" body=""
	I1217 20:31:46.322408  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:46.322712  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:46.822435  522827 type.go:168] "Request Body" body=""
	I1217 20:31:46.822522  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:46.822879  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:47.322808  522827 type.go:168] "Request Body" body=""
	I1217 20:31:47.322888  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:47.323217  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:31:47.323277  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:31:47.823026  522827 type.go:168] "Request Body" body=""
	I1217 20:31:47.823100  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:47.823372  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:48.323164  522827 type.go:168] "Request Body" body=""
	I1217 20:31:48.323244  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:48.323562  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:48.822251  522827 type.go:168] "Request Body" body=""
	I1217 20:31:48.822329  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:48.822696  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:49.322381  522827 type.go:168] "Request Body" body=""
	I1217 20:31:49.322457  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:49.322785  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:49.822503  522827 type.go:168] "Request Body" body=""
	I1217 20:31:49.822582  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:49.822896  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:31:49.822946  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:31:50.322276  522827 type.go:168] "Request Body" body=""
	I1217 20:31:50.322366  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:50.322737  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:50.822198  522827 type.go:168] "Request Body" body=""
	I1217 20:31:50.822270  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:50.822542  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:51.322250  522827 type.go:168] "Request Body" body=""
	I1217 20:31:51.322336  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:51.322688  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:51.822279  522827 type.go:168] "Request Body" body=""
	I1217 20:31:51.822356  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:51.822647  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:52.322172  522827 type.go:168] "Request Body" body=""
	I1217 20:31:52.322269  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:52.322529  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:31:52.322584  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:31:52.822307  522827 type.go:168] "Request Body" body=""
	I1217 20:31:52.822381  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:52.822703  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:53.322352  522827 type.go:168] "Request Body" body=""
	I1217 20:31:53.322443  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:53.322765  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:53.822450  522827 type.go:168] "Request Body" body=""
	I1217 20:31:53.822519  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:53.822836  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:54.322259  522827 type.go:168] "Request Body" body=""
	I1217 20:31:54.322342  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:54.322677  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:31:54.322737  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:31:54.822413  522827 type.go:168] "Request Body" body=""
	I1217 20:31:54.822500  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:54.822844  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:55.322509  522827 type.go:168] "Request Body" body=""
	I1217 20:31:55.322590  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:55.322859  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:55.822209  522827 type.go:168] "Request Body" body=""
	I1217 20:31:55.822286  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:55.822615  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:56.322334  522827 type.go:168] "Request Body" body=""
	I1217 20:31:56.322412  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:56.322700  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:56.822188  522827 type.go:168] "Request Body" body=""
	I1217 20:31:56.822256  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:56.822570  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:31:56.822617  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:31:57.322493  522827 type.go:168] "Request Body" body=""
	I1217 20:31:57.322571  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:57.322891  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:57.822474  522827 type.go:168] "Request Body" body=""
	I1217 20:31:57.822550  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:57.822881  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:58.322311  522827 type.go:168] "Request Body" body=""
	I1217 20:31:58.322386  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:58.322639  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:58.822249  522827 type.go:168] "Request Body" body=""
	I1217 20:31:58.822326  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:58.822659  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:31:58.822714  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:31:59.322242  522827 type.go:168] "Request Body" body=""
	I1217 20:31:59.322321  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:59.322689  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:59.822316  522827 type.go:168] "Request Body" body=""
	I1217 20:31:59.822386  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:59.822717  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:00.322382  522827 type.go:168] "Request Body" body=""
	I1217 20:32:00.322473  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:00.322812  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:00.822290  522827 type.go:168] "Request Body" body=""
	I1217 20:32:00.822374  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:00.822752  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:32:00.822812  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:32:01.322354  522827 type.go:168] "Request Body" body=""
	I1217 20:32:01.322434  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:01.322743  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:01.822229  522827 type.go:168] "Request Body" body=""
	I1217 20:32:01.822312  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:01.822658  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:02.322687  522827 type.go:168] "Request Body" body=""
	I1217 20:32:02.322779  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:02.323110  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:02.823078  522827 type.go:168] "Request Body" body=""
	I1217 20:32:02.823185  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:02.823454  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:32:02.823500  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:32:03.322198  522827 type.go:168] "Request Body" body=""
	I1217 20:32:03.322280  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:03.322619  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:03.822356  522827 type.go:168] "Request Body" body=""
	I1217 20:32:03.822434  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:03.822736  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:04.322315  522827 type.go:168] "Request Body" body=""
	I1217 20:32:04.322389  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:04.322658  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:04.822260  522827 type.go:168] "Request Body" body=""
	I1217 20:32:04.822366  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:04.822762  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:05.322471  522827 type.go:168] "Request Body" body=""
	I1217 20:32:05.322560  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:05.322916  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:32:05.322977  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:32:05.822615  522827 type.go:168] "Request Body" body=""
	I1217 20:32:05.822691  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:05.823031  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:06.322818  522827 type.go:168] "Request Body" body=""
	I1217 20:32:06.322895  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:06.323223  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:06.822995  522827 type.go:168] "Request Body" body=""
	I1217 20:32:06.823069  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:06.823419  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:07.322171  522827 type.go:168] "Request Body" body=""
	I1217 20:32:07.322242  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:07.322555  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:07.822236  522827 type.go:168] "Request Body" body=""
	I1217 20:32:07.822316  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:07.822639  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:32:07.822694  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:32:08.322234  522827 type.go:168] "Request Body" body=""
	I1217 20:32:08.322313  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:08.322610  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:08.822290  522827 type.go:168] "Request Body" body=""
	I1217 20:32:08.822368  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:08.822630  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:09.322201  522827 type.go:168] "Request Body" body=""
	I1217 20:32:09.322283  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:09.322629  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:09.822331  522827 type.go:168] "Request Body" body=""
	I1217 20:32:09.822412  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:09.822739  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:32:09.822812  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:32:10.322224  522827 type.go:168] "Request Body" body=""
	I1217 20:32:10.322297  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:10.322657  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:10.822387  522827 type.go:168] "Request Body" body=""
	I1217 20:32:10.822470  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:10.822875  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:11.322246  522827 type.go:168] "Request Body" body=""
	I1217 20:32:11.322339  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:11.322696  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:11.822377  522827 type.go:168] "Request Body" body=""
	I1217 20:32:11.822447  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:11.822730  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:12.322684  522827 type.go:168] "Request Body" body=""
	I1217 20:32:12.322757  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:12.323075  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:32:12.323135  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:32:12.823123  522827 type.go:168] "Request Body" body=""
	I1217 20:32:12.823215  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:12.823567  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:13.322271  522827 type.go:168] "Request Body" body=""
	I1217 20:32:13.322361  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:13.322653  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:13.822251  522827 type.go:168] "Request Body" body=""
	I1217 20:32:13.822330  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:13.822693  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:14.322242  522827 type.go:168] "Request Body" body=""
	I1217 20:32:14.322324  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:14.322673  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:14.822355  522827 type.go:168] "Request Body" body=""
	I1217 20:32:14.822428  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:14.822689  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:32:14.822736  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:32:15.322245  522827 type.go:168] "Request Body" body=""
	I1217 20:32:15.322322  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:15.322646  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:15.822224  522827 type.go:168] "Request Body" body=""
	I1217 20:32:15.822301  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:15.822625  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:16.322176  522827 type.go:168] "Request Body" body=""
	I1217 20:32:16.322257  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:16.322573  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:16.822265  522827 type.go:168] "Request Body" body=""
	I1217 20:32:16.822341  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:16.822656  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:17.322600  522827 type.go:168] "Request Body" body=""
	I1217 20:32:17.322693  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:17.323051  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:32:17.323108  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:32:17.822821  522827 type.go:168] "Request Body" body=""
	I1217 20:32:17.822890  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:17.823195  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:18.322987  522827 type.go:168] "Request Body" body=""
	I1217 20:32:18.323062  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:18.323387  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:18.823193  522827 type.go:168] "Request Body" body=""
	I1217 20:32:18.823271  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:18.823632  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:19.322231  522827 type.go:168] "Request Body" body=""
	I1217 20:32:19.322300  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:19.322563  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:19.822248  522827 type.go:168] "Request Body" body=""
	I1217 20:32:19.822332  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:19.822689  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:32:19.822743  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:32:20.322270  522827 type.go:168] "Request Body" body=""
	I1217 20:32:20.322351  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:20.322706  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:20.822403  522827 type.go:168] "Request Body" body=""
	I1217 20:32:20.822483  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:20.822759  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:21.322436  522827 type.go:168] "Request Body" body=""
	I1217 20:32:21.322518  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:21.322864  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:21.822578  522827 type.go:168] "Request Body" body=""
	I1217 20:32:21.822655  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:21.823020  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:32:21.823078  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:32:22.322774  522827 type.go:168] "Request Body" body=""
	I1217 20:32:22.322847  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:22.323116  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:22.823126  522827 type.go:168] "Request Body" body=""
	I1217 20:32:22.823213  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:22.823625  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:23.322331  522827 type.go:168] "Request Body" body=""
	I1217 20:32:23.322407  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:23.322751  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:23.822449  522827 type.go:168] "Request Body" body=""
	I1217 20:32:23.822519  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:23.822856  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:24.322228  522827 type.go:168] "Request Body" body=""
	I1217 20:32:24.322310  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:24.322653  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:32:24.322710  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:32:24.822249  522827 type.go:168] "Request Body" body=""
	I1217 20:32:24.822326  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:24.822711  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:25.322197  522827 type.go:168] "Request Body" body=""
	I1217 20:32:25.322269  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:25.322562  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:25.822261  522827 type.go:168] "Request Body" body=""
	I1217 20:32:25.822338  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:25.822615  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:26.322271  522827 type.go:168] "Request Body" body=""
	I1217 20:32:26.322347  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:26.322669  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:26.822294  522827 type.go:168] "Request Body" body=""
	I1217 20:32:26.822375  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:26.822659  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:32:26.822711  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:32:27.322690  522827 type.go:168] "Request Body" body=""
	I1217 20:32:27.322770  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:27.323105  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:27.822647  522827 type.go:168] "Request Body" body=""
	I1217 20:32:27.822726  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:27.823033  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:28.322766  522827 type.go:168] "Request Body" body=""
	I1217 20:32:28.322834  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:28.323196  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:28.822977  522827 type.go:168] "Request Body" body=""
	I1217 20:32:28.823055  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:28.823384  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:32:28.823437  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:32:29.322124  522827 type.go:168] "Request Body" body=""
	I1217 20:32:29.322205  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:29.322530  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:29.822227  522827 type.go:168] "Request Body" body=""
	I1217 20:32:29.822306  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:29.822567  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:30.322239  522827 type.go:168] "Request Body" body=""
	I1217 20:32:30.322320  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:30.322615  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:30.822251  522827 type.go:168] "Request Body" body=""
	I1217 20:32:30.822335  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:30.822684  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:31.322230  522827 type.go:168] "Request Body" body=""
	I1217 20:32:31.322297  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:31.322575  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:32:31.322631  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:32:31.822242  522827 type.go:168] "Request Body" body=""
	I1217 20:32:31.822318  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:31.822645  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:32.322646  522827 type.go:168] "Request Body" body=""
	I1217 20:32:32.322717  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:32.323066  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:32.822921  522827 type.go:168] "Request Body" body=""
	I1217 20:32:32.822993  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:32.823283  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:33.323063  522827 type.go:168] "Request Body" body=""
	I1217 20:32:33.323158  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:33.323500  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:32:33.323569  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:32:33.822260  522827 type.go:168] "Request Body" body=""
	I1217 20:32:33.822354  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:33.822685  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:34.322405  522827 type.go:168] "Request Body" body=""
	I1217 20:32:34.322478  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:34.322748  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:34.822278  522827 type.go:168] "Request Body" body=""
	I1217 20:32:34.822374  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:34.822748  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:35.322476  522827 type.go:168] "Request Body" body=""
	I1217 20:32:35.322570  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:35.322893  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:35.822173  522827 type.go:168] "Request Body" body=""
	I1217 20:32:35.822243  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:35.822502  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:32:35.822542  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:32:36.322264  522827 type.go:168] "Request Body" body=""
	I1217 20:32:36.322345  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:36.322701  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:36.822410  522827 type.go:168] "Request Body" body=""
	I1217 20:32:36.822488  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:36.822823  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:37.322668  522827 type.go:168] "Request Body" body=""
	I1217 20:32:37.322737  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:37.322989  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:37.822848  522827 type.go:168] "Request Body" body=""
	I1217 20:32:37.822924  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:37.823275  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:32:37.823343  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:32:38.323095  522827 type.go:168] "Request Body" body=""
	I1217 20:32:38.323188  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:38.323541  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:38.822238  522827 type.go:168] "Request Body" body=""
	I1217 20:32:38.822315  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:38.822608  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:39.322245  522827 type.go:168] "Request Body" body=""
	I1217 20:32:39.322381  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:39.322729  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:39.822442  522827 type.go:168] "Request Body" body=""
	I1217 20:32:39.822521  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:39.822851  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:40.322537  522827 type.go:168] "Request Body" body=""
	I1217 20:32:40.322611  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:40.322918  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:32:40.322971  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:32:40.822252  522827 type.go:168] "Request Body" body=""
	I1217 20:32:40.822327  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:40.822657  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:41.322382  522827 type.go:168] "Request Body" body=""
	I1217 20:32:41.322458  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:41.322791  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:41.822307  522827 type.go:168] "Request Body" body=""
	I1217 20:32:41.822377  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:41.822665  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:42.322693  522827 type.go:168] "Request Body" body=""
	I1217 20:32:42.322766  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:42.323102  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:32:42.323170  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:32:42.823022  522827 type.go:168] "Request Body" body=""
	I1217 20:32:42.823123  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:42.823479  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:43.322175  522827 type.go:168] "Request Body" body=""
	I1217 20:32:43.322271  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:43.322523  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:43.822319  522827 type.go:168] "Request Body" body=""
	I1217 20:32:43.822415  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:43.822789  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:44.322263  522827 type.go:168] "Request Body" body=""
	I1217 20:32:44.322339  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:44.322660  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:44.822216  522827 type.go:168] "Request Body" body=""
	I1217 20:32:44.822287  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:44.822560  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:32:44.822601  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:32:45.322398  522827 type.go:168] "Request Body" body=""
	I1217 20:32:45.322606  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:45.323117  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:45.823034  522827 type.go:168] "Request Body" body=""
	I1217 20:32:45.823140  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:45.823517  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:46.322220  522827 type.go:168] "Request Body" body=""
	I1217 20:32:46.322304  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:46.322623  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:46.822261  522827 type.go:168] "Request Body" body=""
	I1217 20:32:46.822341  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:46.822687  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:32:46.822747  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:32:47.322536  522827 type.go:168] "Request Body" body=""
	I1217 20:32:47.322612  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:47.322939  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:47.822456  522827 type.go:168] "Request Body" body=""
	I1217 20:32:47.822529  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:47.822784  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:48.322237  522827 type.go:168] "Request Body" body=""
	I1217 20:32:48.322319  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:48.322675  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:48.822389  522827 type.go:168] "Request Body" body=""
	I1217 20:32:48.822473  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:48.822819  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:32:48.822885  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:32:49.322495  522827 type.go:168] "Request Body" body=""
	I1217 20:32:49.322569  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:49.322865  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:49.822558  522827 type.go:168] "Request Body" body=""
	I1217 20:32:49.822637  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:49.822970  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:50.322764  522827 type.go:168] "Request Body" body=""
	I1217 20:32:50.322842  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:50.323193  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:50.822930  522827 type.go:168] "Request Body" body=""
	I1217 20:32:50.823006  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:50.823301  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:32:50.823453  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:32:51.322133  522827 type.go:168] "Request Body" body=""
	I1217 20:32:51.322212  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:51.322566  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:51.822287  522827 type.go:168] "Request Body" body=""
	I1217 20:32:51.822362  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:51.822679  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the identical GET poll of https://192.168.49.2:8441/api/v1/nodes/functional-655452 repeats every ~500ms from 20:32:52 until 20:33:53 (final poll shown below); every attempt returns no response (status="" headers="" milliseconds=0), and roughly every two seconds a retry warning of the following form is logged ...]
	W1217 20:32:52.823559  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:33:53.822148  522827 type.go:168] "Request Body" body=""
	I1217 20:33:53.822225  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:53.822487  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:33:53.822527  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:33:54.322252  522827 type.go:168] "Request Body" body=""
	I1217 20:33:54.322346  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:54.322676  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:54.822391  522827 type.go:168] "Request Body" body=""
	I1217 20:33:54.822487  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:54.822807  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:55.322470  522827 type.go:168] "Request Body" body=""
	I1217 20:33:55.322551  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:55.322876  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:55.822280  522827 type.go:168] "Request Body" body=""
	I1217 20:33:55.822364  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:55.822753  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:33:55.822813  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:33:56.322272  522827 type.go:168] "Request Body" body=""
	I1217 20:33:56.322351  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:56.322670  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:56.822314  522827 type.go:168] "Request Body" body=""
	I1217 20:33:56.822391  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:56.822658  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:57.322710  522827 type.go:168] "Request Body" body=""
	I1217 20:33:57.322780  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:57.323117  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:57.822916  522827 type.go:168] "Request Body" body=""
	I1217 20:33:57.823001  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:57.823366  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:33:57.823421  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:33:58.323148  522827 type.go:168] "Request Body" body=""
	I1217 20:33:58.323218  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:58.323513  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:58.822212  522827 type.go:168] "Request Body" body=""
	I1217 20:33:58.822296  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:58.822656  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:59.322223  522827 type.go:168] "Request Body" body=""
	I1217 20:33:59.322305  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:59.322651  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:59.822221  522827 type.go:168] "Request Body" body=""
	I1217 20:33:59.822297  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:59.822641  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:00.322298  522827 type.go:168] "Request Body" body=""
	I1217 20:34:00.322392  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:00.322726  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:34:00.322782  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:34:00.822577  522827 type.go:168] "Request Body" body=""
	I1217 20:34:00.822662  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:00.823038  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:01.322657  522827 type.go:168] "Request Body" body=""
	I1217 20:34:01.322731  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:01.323025  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:01.822880  522827 type.go:168] "Request Body" body=""
	I1217 20:34:01.822955  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:01.823320  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:02.323040  522827 type.go:168] "Request Body" body=""
	I1217 20:34:02.323124  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:02.323461  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:34:02.323514  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:34:02.822183  522827 type.go:168] "Request Body" body=""
	I1217 20:34:02.822254  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:02.822522  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:03.322245  522827 type.go:168] "Request Body" body=""
	I1217 20:34:03.322323  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:03.322656  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:03.822270  522827 type.go:168] "Request Body" body=""
	I1217 20:34:03.822349  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:03.822703  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:04.322271  522827 type.go:168] "Request Body" body=""
	I1217 20:34:04.322351  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:04.322622  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:04.822245  522827 type.go:168] "Request Body" body=""
	I1217 20:34:04.822344  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:04.822655  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:34:04.822707  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:34:05.322405  522827 type.go:168] "Request Body" body=""
	I1217 20:34:05.322482  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:05.322821  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:05.822296  522827 type.go:168] "Request Body" body=""
	I1217 20:34:05.822365  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:05.822641  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:06.322257  522827 type.go:168] "Request Body" body=""
	I1217 20:34:06.322357  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:06.322688  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:06.822277  522827 type.go:168] "Request Body" body=""
	I1217 20:34:06.822353  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:06.822641  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:07.322615  522827 type.go:168] "Request Body" body=""
	I1217 20:34:07.322701  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:07.322998  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:34:07.323048  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:34:07.822861  522827 type.go:168] "Request Body" body=""
	I1217 20:34:07.822938  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:07.823293  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:08.323117  522827 type.go:168] "Request Body" body=""
	I1217 20:34:08.323193  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:08.323537  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:08.822232  522827 type.go:168] "Request Body" body=""
	I1217 20:34:08.822303  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:08.822638  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:09.322217  522827 type.go:168] "Request Body" body=""
	I1217 20:34:09.322290  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:09.322637  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:09.822237  522827 type.go:168] "Request Body" body=""
	I1217 20:34:09.822317  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:09.822642  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:34:09.822697  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:34:10.322196  522827 type.go:168] "Request Body" body=""
	I1217 20:34:10.322271  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:10.322575  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:10.822218  522827 type.go:168] "Request Body" body=""
	I1217 20:34:10.822302  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:10.822644  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:11.322351  522827 type.go:168] "Request Body" body=""
	I1217 20:34:11.322431  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:11.322804  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:11.822287  522827 type.go:168] "Request Body" body=""
	I1217 20:34:11.822357  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:11.822618  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:12.322611  522827 type.go:168] "Request Body" body=""
	I1217 20:34:12.322687  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:12.323025  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:34:12.323091  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:34:12.822902  522827 type.go:168] "Request Body" body=""
	I1217 20:34:12.822982  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:12.823336  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:13.323079  522827 type.go:168] "Request Body" body=""
	I1217 20:34:13.323153  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:13.323408  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:13.822161  522827 type.go:168] "Request Body" body=""
	I1217 20:34:13.822240  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:13.822575  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:14.322231  522827 type.go:168] "Request Body" body=""
	I1217 20:34:14.322308  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:14.322650  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:14.822224  522827 type.go:168] "Request Body" body=""
	I1217 20:34:14.822298  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:14.822572  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:34:14.822622  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:34:15.322292  522827 type.go:168] "Request Body" body=""
	I1217 20:34:15.322381  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:15.322726  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:15.822430  522827 type.go:168] "Request Body" body=""
	I1217 20:34:15.822518  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:15.822853  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:16.322471  522827 type.go:168] "Request Body" body=""
	I1217 20:34:16.322546  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:16.322836  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:16.822523  522827 type.go:168] "Request Body" body=""
	I1217 20:34:16.822605  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:16.822901  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:34:16.822951  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:34:17.322790  522827 type.go:168] "Request Body" body=""
	I1217 20:34:17.322869  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:17.323207  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:17.822955  522827 type.go:168] "Request Body" body=""
	I1217 20:34:17.823029  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:17.823314  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:18.323135  522827 type.go:168] "Request Body" body=""
	I1217 20:34:18.323209  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:18.323507  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:18.822255  522827 type.go:168] "Request Body" body=""
	I1217 20:34:18.822334  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:18.822699  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:19.322387  522827 type.go:168] "Request Body" body=""
	I1217 20:34:19.322457  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:19.322762  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:34:19.322824  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:34:19.822236  522827 type.go:168] "Request Body" body=""
	I1217 20:34:19.822309  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:19.822629  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:20.322246  522827 type.go:168] "Request Body" body=""
	I1217 20:34:20.322329  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:20.322668  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:20.822198  522827 type.go:168] "Request Body" body=""
	I1217 20:34:20.822275  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:20.822590  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:21.322284  522827 type.go:168] "Request Body" body=""
	I1217 20:34:21.322362  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:21.322710  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:21.822303  522827 type.go:168] "Request Body" body=""
	I1217 20:34:21.822382  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:21.822717  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:34:21.822772  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:34:22.322546  522827 type.go:168] "Request Body" body=""
	I1217 20:34:22.322615  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:22.322869  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:22.822850  522827 type.go:168] "Request Body" body=""
	I1217 20:34:22.822926  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:22.823275  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:23.323068  522827 type.go:168] "Request Body" body=""
	I1217 20:34:23.323142  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:23.323472  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:23.822173  522827 type.go:168] "Request Body" body=""
	I1217 20:34:23.822252  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:23.822565  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:24.322250  522827 type.go:168] "Request Body" body=""
	I1217 20:34:24.322333  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:24.322684  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:34:24.322736  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:34:24.822303  522827 type.go:168] "Request Body" body=""
	I1217 20:34:24.822394  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:24.822738  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:25.322430  522827 type.go:168] "Request Body" body=""
	I1217 20:34:25.322506  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:25.322760  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:25.822245  522827 type.go:168] "Request Body" body=""
	I1217 20:34:25.822324  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:25.822671  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:26.322262  522827 type.go:168] "Request Body" body=""
	I1217 20:34:26.322344  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:26.322685  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:26.822350  522827 type.go:168] "Request Body" body=""
	I1217 20:34:26.822425  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:26.822723  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:34:26.822775  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:34:27.322731  522827 type.go:168] "Request Body" body=""
	I1217 20:34:27.322805  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:27.323135  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:27.822789  522827 type.go:168] "Request Body" body=""
	I1217 20:34:27.822869  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:27.823223  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:28.323014  522827 type.go:168] "Request Body" body=""
	I1217 20:34:28.323092  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:28.323358  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:28.823134  522827 type.go:168] "Request Body" body=""
	I1217 20:34:28.823222  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:28.823569  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:34:28.823650  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:34:29.322221  522827 type.go:168] "Request Body" body=""
	I1217 20:34:29.322302  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:29.322620  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:29.822198  522827 type.go:168] "Request Body" body=""
	I1217 20:34:29.822278  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:29.822544  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:30.322232  522827 type.go:168] "Request Body" body=""
	I1217 20:34:30.322310  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:30.322633  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:30.822346  522827 type.go:168] "Request Body" body=""
	I1217 20:34:30.822427  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:30.822767  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:31.322434  522827 type.go:168] "Request Body" body=""
	I1217 20:34:31.322509  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:31.322812  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:34:31.322864  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:34:31.822232  522827 type.go:168] "Request Body" body=""
	I1217 20:34:31.822308  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:31.822637  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:32.322630  522827 type.go:168] "Request Body" body=""
	I1217 20:34:32.322703  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:32.323039  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:32.822905  522827 type.go:168] "Request Body" body=""
	I1217 20:34:32.822987  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:32.823335  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:33.323139  522827 type.go:168] "Request Body" body=""
	I1217 20:34:33.323215  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:33.323560  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:34:33.323633  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:34:33.822242  522827 type.go:168] "Request Body" body=""
	I1217 20:34:33.822322  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:33.822657  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:34.322213  522827 type.go:168] "Request Body" body=""
	I1217 20:34:34.322306  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:34.322645  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:34.822274  522827 type.go:168] "Request Body" body=""
	I1217 20:34:34.822351  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:34.822676  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:35.322402  522827 type.go:168] "Request Body" body=""
	I1217 20:34:35.322487  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:35.322839  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:35.822515  522827 type.go:168] "Request Body" body=""
	I1217 20:34:35.822590  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:35.822930  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:34:35.822983  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:34:36.322255  522827 type.go:168] "Request Body" body=""
	I1217 20:34:36.322336  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:36.322707  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:36.822275  522827 type.go:168] "Request Body" body=""
	I1217 20:34:36.822356  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:36.822697  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:37.322527  522827 type.go:168] "Request Body" body=""
	I1217 20:34:37.322599  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:37.322871  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:37.822227  522827 type.go:168] "Request Body" body=""
	I1217 20:34:37.822302  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:37.822644  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:38.322240  522827 type.go:168] "Request Body" body=""
	I1217 20:34:38.322315  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:38.322686  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:34:38.322744  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:34:38.822377  522827 type.go:168] "Request Body" body=""
	I1217 20:34:38.822445  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:38.822700  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:39.322353  522827 type.go:168] "Request Body" body=""
	I1217 20:34:39.322436  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:39.322776  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:39.822486  522827 type.go:168] "Request Body" body=""
	I1217 20:34:39.822576  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:39.822923  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:40.322215  522827 type.go:168] "Request Body" body=""
	I1217 20:34:40.322285  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:40.322627  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:40.822320  522827 type.go:168] "Request Body" body=""
	I1217 20:34:40.822392  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:40.822751  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:34:40.822813  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:34:41.322501  522827 type.go:168] "Request Body" body=""
	I1217 20:34:41.322580  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:41.322864  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:41.822296  522827 type.go:168] "Request Body" body=""
	I1217 20:34:41.822379  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:41.822641  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:42.322620  522827 type.go:168] "Request Body" body=""
	I1217 20:34:42.322699  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:42.323049  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:42.822854  522827 type.go:168] "Request Body" body=""
	I1217 20:34:42.822937  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:42.823298  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:34:42.823352  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same GET https://192.168.49.2:8441/api/v1/nodes/functional-655452 poll repeats every ~500ms from 20:34:43.322 through 20:35:42.822, each cycle logging an empty "Request Body", the request headers shown above, and an empty response (status="" headers="" milliseconds=0); node_ready.go:55 logs the same warning, error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused, about every two seconds ...]
	I1217 20:35:43.322188  522827 type.go:168] "Request Body" body=""
	I1217 20:35:43.322249  522827 node_ready.go:38] duration metric: took 6m0.000239045s for node "functional-655452" to be "Ready" ...
	I1217 20:35:43.325291  522827 out.go:203] 
	W1217 20:35:43.328188  522827 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1217 20:35:43.328206  522827 out.go:285] * 
	W1217 20:35:43.330331  522827 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 20:35:43.333111  522827 out.go:203] 
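	The retry entries above show the node-ready wait loop: minikube polls GET /api/v1/nodes/functional-655452 roughly every 500ms, every attempt ends in "connection refused", and the 6m deadline recorded at node_ready.go:38 finally expires, producing the GUEST_START error. A rough manual equivalent of that readiness probe (a sketch only; it assumes the functional-655452 kubeconfig context is available on the host) is:
	
	    # Print just the Ready condition status of the node
	    kubectl --context functional-655452 get node functional-655452 \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'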
	
	
	==> CRI-O <==
	Dec 17 20:29:40 functional-655452 crio[5447]: time="2025-12-17T20:29:40.480906996Z" level=info msg="Using the internal default seccomp profile"
	Dec 17 20:29:40 functional-655452 crio[5447]: time="2025-12-17T20:29:40.480914348Z" level=info msg="AppArmor is disabled by the system or at CRI-O build-time"
	Dec 17 20:29:40 functional-655452 crio[5447]: time="2025-12-17T20:29:40.480920182Z" level=info msg="No blockio config file specified, blockio not configured"
	Dec 17 20:29:40 functional-655452 crio[5447]: time="2025-12-17T20:29:40.480926287Z" level=info msg="RDT not available in the host system"
	Dec 17 20:29:40 functional-655452 crio[5447]: time="2025-12-17T20:29:40.480942771Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 17 20:29:40 functional-655452 crio[5447]: time="2025-12-17T20:29:40.481715201Z" level=info msg="Conmon does support the --sync option"
	Dec 17 20:29:40 functional-655452 crio[5447]: time="2025-12-17T20:29:40.481735672Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 17 20:29:40 functional-655452 crio[5447]: time="2025-12-17T20:29:40.481751648Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 17 20:29:40 functional-655452 crio[5447]: time="2025-12-17T20:29:40.482467921Z" level=info msg="Conmon does support the --sync option"
	Dec 17 20:29:40 functional-655452 crio[5447]: time="2025-12-17T20:29:40.482494301Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 17 20:29:40 functional-655452 crio[5447]: time="2025-12-17T20:29:40.482628103Z" level=info msg="Updated default CNI network name to "
	Dec 17 20:29:40 functional-655452 crio[5447]: time="2025-12-17T20:29:40.483177475Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oc
i/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"cgroupfs\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n
uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_
memory = \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_d
ir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [c
rio.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 17 20:29:40 functional-655452 crio[5447]: time="2025-12-17T20:29:40.483642644Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 17 20:29:40 functional-655452 crio[5447]: time="2025-12-17T20:29:40.48371238Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 17 20:29:40 functional-655452 crio[5447]: time="2025-12-17T20:29:40.540730063Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 17 20:29:40 functional-655452 crio[5447]: time="2025-12-17T20:29:40.540938081Z" level=info msg="Starting seccomp notifier watcher"
	Dec 17 20:29:40 functional-655452 crio[5447]: time="2025-12-17T20:29:40.54099651Z" level=info msg="Create NRI interface"
	Dec 17 20:29:40 functional-655452 crio[5447]: time="2025-12-17T20:29:40.541126464Z" level=info msg="built-in NRI default validator is disabled"
	Dec 17 20:29:40 functional-655452 crio[5447]: time="2025-12-17T20:29:40.541145295Z" level=info msg="runtime interface created"
	Dec 17 20:29:40 functional-655452 crio[5447]: time="2025-12-17T20:29:40.541159761Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 17 20:29:40 functional-655452 crio[5447]: time="2025-12-17T20:29:40.541167408Z" level=info msg="runtime interface starting up..."
	Dec 17 20:29:40 functional-655452 crio[5447]: time="2025-12-17T20:29:40.541173546Z" level=info msg="starting plugins..."
	Dec 17 20:29:40 functional-655452 crio[5447]: time="2025-12-17T20:29:40.541188307Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 17 20:29:40 functional-655452 crio[5447]: time="2025-12-17T20:29:40.541273649Z" level=info msg="No systemd watchdog enabled"
	Dec 17 20:29:40 functional-655452 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
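	The CRI-O section above is otherwise normal startup output; the detail relevant to this failure is cgroup_manager = "cgroupfs" in the dumped configuration, consistent with a host still on cgroup v1 (see the kubelet errors below). A generic way to confirm which cgroup version a host exposes (a standard check, not part of the test harness) is:
	
	    # "cgroup2fs" means the unified cgroup v2 hierarchy; "tmpfs" means cgroup v1
	    stat -fc %T /sys/fs/cgroup/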
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:35:45.416271    8690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:35:45.417040    8690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:35:45.418596    8690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:35:45.419141    8690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:35:45.420540    8690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec17 19:02] hrtimer: interrupt took 71042327 ns
	[Dec17 20:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec17 20:12] overlayfs: idmapped layers are currently not supported
	[  +0.080288] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec17 20:17] overlayfs: idmapped layers are currently not supported
	[Dec17 20:18] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 20:35:45 up  3:18,  0 user,  load average: 0.26, 0.29, 0.89
	Linux functional-655452 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 20:35:42 functional-655452 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 20:35:43 functional-655452 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1136.
	Dec 17 20:35:43 functional-655452 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 20:35:43 functional-655452 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 20:35:43 functional-655452 kubelet[8581]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 20:35:43 functional-655452 kubelet[8581]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 20:35:43 functional-655452 kubelet[8581]: E1217 20:35:43.392454    8581 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 20:35:43 functional-655452 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 20:35:43 functional-655452 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 20:35:44 functional-655452 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1137.
	Dec 17 20:35:44 functional-655452 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 20:35:44 functional-655452 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 20:35:44 functional-655452 kubelet[8587]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 20:35:44 functional-655452 kubelet[8587]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 20:35:44 functional-655452 kubelet[8587]: E1217 20:35:44.134060    8587 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 20:35:44 functional-655452 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 20:35:44 functional-655452 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 20:35:44 functional-655452 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1138.
	Dec 17 20:35:44 functional-655452 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 20:35:44 functional-655452 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 20:35:44 functional-655452 kubelet[8608]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 20:35:44 functional-655452 kubelet[8608]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 20:35:44 functional-655452 kubelet[8608]: E1217 20:35:44.885307    8608 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 20:35:44 functional-655452 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 20:35:44 functional-655452 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
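The kubelet never comes up: each restart in the log above (counters 1136-1138) fails configuration validation with "kubelet is configured to not run on a host using cgroup v1", so the apiserver behind 192.168.49.2:8441 never starts and every request is refused. One way to watch the crash loop directly on the node (a sketch; it assumes the same binary and profile name used in this run) is:

    # Tail the kubelet unit inside the minikube container
    out/minikube-linux-arm64 -p functional-655452 ssh -- sudo journalctl -u kubelet -n 20 --no-pager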
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-655452 -n functional-655452
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-655452 -n functional-655452: exit status 2 (359.096728ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-655452" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart (369.31s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods (2.41s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-655452 get po -A
functional_test.go:711: (dbg) Non-zero exit: kubectl --context functional-655452 get po -A: exit status 1 (59.135361ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:713: failed to get kubectl pods: args "kubectl --context functional-655452 get po -A" : exit status 1
functional_test.go:717: expected stderr to be empty but got *"The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?\n"*: args "kubectl --context functional-655452 get po -A"
functional_test.go:720: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-655452 get po -A"
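The kubectl failure here is the same symptom as the SoftStart failure above: nothing is listening on the apiserver endpoint. Two quick probes from the host (sketches only; they assume the host can reach the container IP 192.168.49.2 directly, as it normally can with the docker driver on Linux) are:

    # Re-run the failing command
    kubectl --context functional-655452 get po -A
    # Probe the apiserver health endpoint directly
    curl -sk https://192.168.49.2:8441/healthz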
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-655452
helpers_test.go:244: (dbg) docker inspect functional-655452:

-- stdout --
	[
	    {
	        "Id": "ce7a6a54d14f425b4187bd0ca1b803527a4413b0e2019a0bca6ff0c4cc720b0f",
	        "Created": "2025-12-17T20:21:23.127111799Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 517339,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T20:21:23.205943598Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/ce7a6a54d14f425b4187bd0ca1b803527a4413b0e2019a0bca6ff0c4cc720b0f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ce7a6a54d14f425b4187bd0ca1b803527a4413b0e2019a0bca6ff0c4cc720b0f/hostname",
	        "HostsPath": "/var/lib/docker/containers/ce7a6a54d14f425b4187bd0ca1b803527a4413b0e2019a0bca6ff0c4cc720b0f/hosts",
	        "LogPath": "/var/lib/docker/containers/ce7a6a54d14f425b4187bd0ca1b803527a4413b0e2019a0bca6ff0c4cc720b0f/ce7a6a54d14f425b4187bd0ca1b803527a4413b0e2019a0bca6ff0c4cc720b0f-json.log",
	        "Name": "/functional-655452",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-655452:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-655452",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ce7a6a54d14f425b4187bd0ca1b803527a4413b0e2019a0bca6ff0c4cc720b0f",
	                "LowerDir": "/var/lib/docker/overlay2/b078ea641dafe3bb75ca3d83c43fd851f0f209e5f3a5a8b9ceab888e394031a8-init/diff:/var/lib/docker/overlay2/c4759b344ecb109c83d66e4ef56c76903f1d1e597efb86ab2d2c7911b1130a8f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b078ea641dafe3bb75ca3d83c43fd851f0f209e5f3a5a8b9ceab888e394031a8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b078ea641dafe3bb75ca3d83c43fd851f0f209e5f3a5a8b9ceab888e394031a8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b078ea641dafe3bb75ca3d83c43fd851f0f209e5f3a5a8b9ceab888e394031a8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-655452",
	                "Source": "/var/lib/docker/volumes/functional-655452/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-655452",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-655452",
	                "name.minikube.sigs.k8s.io": "functional-655452",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "77ae7dbcf69b3201be4ce65a88d235dbad078142318e01a5939425f9766fb924",
	            "SandboxKey": "/var/run/docker/netns/77ae7dbcf69b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33178"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33179"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33182"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33180"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33181"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-655452": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "36:c6:98:b5:58:5a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b68cd6e69ccdb81b0870dd976e2d5401ca696480842d2c469293121d59435cfb",
	                    "EndpointID": "82926bb4b08b5f66131c1026a8bc72a582e9cb8c6d97a278a494965923f47cb0",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-655452",
	                        "ce7a6a54d14f"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
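The inspect output confirms the container itself is running and that the apiserver port 8441/tcp is published to 127.0.0.1:33181 on the host, so the "connection refused" errors come from nothing listening inside the container rather than from a missing port mapping. The mapping can be read back without the full inspect dump (standard Docker CLI, not test-harness specific):

    # Prints the host address bound to the container's 8441/tcp, e.g. 127.0.0.1:33181
    docker port functional-655452 8441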
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-655452 -n functional-655452
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-655452 -n functional-655452: exit status 2 (309.110236ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p functional-655452 logs -n 25: (1.092675625s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                      ARGS                                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-643319 ssh findmnt -T /mount-9p | grep 9p                                                                                            │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │                     │
	│ ssh            │ functional-643319 ssh findmnt -T /mount-9p | grep 9p                                                                                            │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │ 17 Dec 25 20:21 UTC │
	│ ssh            │ functional-643319 ssh -- ls -la /mount-9p                                                                                                       │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │ 17 Dec 25 20:21 UTC │
	│ ssh            │ functional-643319 ssh sudo umount -f /mount-9p                                                                                                  │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │                     │
	│ mount          │ -p functional-643319 /tmp/TestFunctionalparallelMountCmdVerifyCleanup781964025/001:/mount2 --alsologtostderr -v=1                               │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │                     │
	│ mount          │ -p functional-643319 /tmp/TestFunctionalparallelMountCmdVerifyCleanup781964025/001:/mount1 --alsologtostderr -v=1                               │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │                     │
	│ mount          │ -p functional-643319 /tmp/TestFunctionalparallelMountCmdVerifyCleanup781964025/001:/mount3 --alsologtostderr -v=1                               │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │                     │
	│ ssh            │ functional-643319 ssh findmnt -T /mount1                                                                                                        │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │                     │
	│ ssh            │ functional-643319 ssh findmnt -T /mount1                                                                                                        │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │ 17 Dec 25 20:21 UTC │
	│ ssh            │ functional-643319 ssh findmnt -T /mount2                                                                                                        │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │ 17 Dec 25 20:21 UTC │
	│ ssh            │ functional-643319 ssh findmnt -T /mount3                                                                                                        │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │ 17 Dec 25 20:21 UTC │
	│ mount          │ -p functional-643319 --kill=true                                                                                                                │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │                     │
	│ update-context │ functional-643319 update-context --alsologtostderr -v=2                                                                                         │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │ 17 Dec 25 20:21 UTC │
	│ update-context │ functional-643319 update-context --alsologtostderr -v=2                                                                                         │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │ 17 Dec 25 20:21 UTC │
	│ update-context │ functional-643319 update-context --alsologtostderr -v=2                                                                                         │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │ 17 Dec 25 20:21 UTC │
	│ image          │ functional-643319 image ls --format short --alsologtostderr                                                                                     │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │ 17 Dec 25 20:21 UTC │
	│ image          │ functional-643319 image ls --format yaml --alsologtostderr                                                                                      │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │ 17 Dec 25 20:21 UTC │
	│ ssh            │ functional-643319 ssh pgrep buildkitd                                                                                                           │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │                     │
	│ image          │ functional-643319 image build -t localhost/my-image:functional-643319 testdata/build --alsologtostderr                                          │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │ 17 Dec 25 20:21 UTC │
	│ image          │ functional-643319 image ls --format json --alsologtostderr                                                                                      │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │ 17 Dec 25 20:21 UTC │
	│ image          │ functional-643319 image ls --format table --alsologtostderr                                                                                     │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │ 17 Dec 25 20:21 UTC │
	│ image          │ functional-643319 image ls                                                                                                                      │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │ 17 Dec 25 20:21 UTC │
	│ delete         │ -p functional-643319                                                                                                                            │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │ 17 Dec 25 20:21 UTC │
	│ start          │ -p functional-655452 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │                     │
	│ start          │ -p functional-655452 --alsologtostderr -v=8                                                                                                     │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:29 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
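	Note the empty END TIME on the last two start rows: both the initial start of functional-655452 (20:21 UTC) and the retry at 20:29 never completed, which is consistent with the node-ready timeout above. Profile state can be listed without re-running a start (a sketch, using the same binary as this run):
	
	    # Shows each profile with its driver, runtime, and status
	    out/minikube-linux-arm64 profile list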
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 20:29:37
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 20:29:37.230217  522827 out.go:360] Setting OutFile to fd 1 ...
	I1217 20:29:37.230338  522827 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:29:37.230348  522827 out.go:374] Setting ErrFile to fd 2...
	I1217 20:29:37.230354  522827 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:29:37.230641  522827 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-485134/.minikube/bin
	I1217 20:29:37.231040  522827 out.go:368] Setting JSON to false
	I1217 20:29:37.231956  522827 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":11527,"bootTime":1765991851,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1217 20:29:37.232033  522827 start.go:143] virtualization:  
	I1217 20:29:37.235360  522827 out.go:179] * [functional-655452] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 20:29:37.239166  522827 out.go:179]   - MINIKUBE_LOCATION=21808
	I1217 20:29:37.239533  522827 notify.go:221] Checking for updates...
	I1217 20:29:37.245507  522827 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 20:29:37.248369  522827 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-485134/kubeconfig
	I1217 20:29:37.251209  522827 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-485134/.minikube
	I1217 20:29:37.254179  522827 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 20:29:37.257129  522827 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 20:29:37.260562  522827 config.go:182] Loaded profile config "functional-655452": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 20:29:37.260726  522827 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 20:29:37.289208  522827 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 20:29:37.289391  522827 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 20:29:37.344995  522827 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 20:29:37.33566048 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 20:29:37.345107  522827 docker.go:319] overlay module found
	I1217 20:29:37.348246  522827 out.go:179] * Using the docker driver based on existing profile
	I1217 20:29:37.351193  522827 start.go:309] selected driver: docker
	I1217 20:29:37.351220  522827 start.go:927] validating driver "docker" against &{Name:functional-655452 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-655452 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:fa
lse CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:29:37.351378  522827 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 20:29:37.351479  522827 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 20:29:37.406404  522827 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 20:29:37.397152083 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 20:29:37.406839  522827 cni.go:84] Creating CNI manager for ""
	I1217 20:29:37.406903  522827 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 20:29:37.406958  522827 start.go:353] cluster config:
	{Name:functional-655452 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-655452 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:29:37.410074  522827 out.go:179] * Starting "functional-655452" primary control-plane node in "functional-655452" cluster
	I1217 20:29:37.413044  522827 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 20:29:37.415960  522827 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 20:29:37.418922  522827 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1217 20:29:37.418997  522827 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21808-485134/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4
	I1217 20:29:37.419012  522827 cache.go:65] Caching tarball of preloaded images
	I1217 20:29:37.419028  522827 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 20:29:37.419099  522827 preload.go:238] Found /home/jenkins/minikube-integration/21808-485134/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1217 20:29:37.419110  522827 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on crio
	I1217 20:29:37.419218  522827 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/config.json ...
	I1217 20:29:37.438883  522827 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 20:29:37.438908  522827 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 20:29:37.438929  522827 cache.go:243] Successfully downloaded all kic artifacts
	I1217 20:29:37.438964  522827 start.go:360] acquireMachinesLock for functional-655452: {Name:mk8b480106227149e1b79da3c4f580b9dc5e0f96 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 20:29:37.439024  522827 start.go:364] duration metric: took 37.399µs to acquireMachinesLock for "functional-655452"
	I1217 20:29:37.439047  522827 start.go:96] Skipping create...Using existing machine configuration
	I1217 20:29:37.439057  522827 fix.go:54] fixHost starting: 
	I1217 20:29:37.439341  522827 cli_runner.go:164] Run: docker container inspect functional-655452 --format={{.State.Status}}
	I1217 20:29:37.456072  522827 fix.go:112] recreateIfNeeded on functional-655452: state=Running err=<nil>
	W1217 20:29:37.456113  522827 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 20:29:37.459179  522827 out.go:252] * Updating the running docker "functional-655452" container ...
	I1217 20:29:37.459210  522827 machine.go:94] provisionDockerMachine start ...
	I1217 20:29:37.459290  522827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
	I1217 20:29:37.476101  522827 main.go:143] libmachine: Using SSH client type: native
	I1217 20:29:37.476449  522827 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1217 20:29:37.476466  522827 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 20:29:37.607148  522827 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-655452
	
	I1217 20:29:37.607176  522827 ubuntu.go:182] provisioning hostname "functional-655452"
	I1217 20:29:37.607253  522827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
	I1217 20:29:37.625523  522827 main.go:143] libmachine: Using SSH client type: native
	I1217 20:29:37.625850  522827 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1217 20:29:37.625869  522827 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-655452 && echo "functional-655452" | sudo tee /etc/hostname
	I1217 20:29:37.765012  522827 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-655452
	
	I1217 20:29:37.765095  522827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
	I1217 20:29:37.783574  522827 main.go:143] libmachine: Using SSH client type: native
	I1217 20:29:37.784233  522827 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1217 20:29:37.784256  522827 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-655452' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-655452/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-655452' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 20:29:37.923858  522827 main.go:143] libmachine: SSH cmd err, output: <nil>: 
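The shell snippet above follows the Debian convention of pinning the machine's hostname to 127.0.1.1 in /etc/hosts, and it only appends an entry when no line for the hostname exists yet. A minimal sketch of verifying the result over the same SSH session (illustrative, assuming getent is present in the kicbase image):

    # Confirm the 127.0.1.1 mapping written by the snippet above.
    getent hosts functional-655452    # expected: 127.0.1.1 functional-655452
    grep -c 'functional-655452' /etc/hosts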
	I1217 20:29:37.923885  522827 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21808-485134/.minikube CaCertPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21808-485134/.minikube}
	I1217 20:29:37.923918  522827 ubuntu.go:190] setting up certificates
	I1217 20:29:37.923930  522827 provision.go:84] configureAuth start
	I1217 20:29:37.923995  522827 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-655452
	I1217 20:29:37.942198  522827 provision.go:143] copyHostCerts
	I1217 20:29:37.942245  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem
	I1217 20:29:37.942294  522827 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem, removing ...
	I1217 20:29:37.942308  522827 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem
	I1217 20:29:37.942385  522827 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem (1082 bytes)
	I1217 20:29:37.942483  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem
	I1217 20:29:37.942506  522827 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem, removing ...
	I1217 20:29:37.942510  522827 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem
	I1217 20:29:37.942538  522827 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem (1123 bytes)
	I1217 20:29:37.942584  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem
	I1217 20:29:37.942605  522827 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem, removing ...
	I1217 20:29:37.942613  522827 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem
	I1217 20:29:37.942638  522827 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem (1675 bytes)
	I1217 20:29:37.942696  522827 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem org=jenkins.functional-655452 san=[127.0.0.1 192.168.49.2 functional-655452 localhost minikube]
	I1217 20:29:38.205373  522827 provision.go:177] copyRemoteCerts
	I1217 20:29:38.205444  522827 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 20:29:38.205488  522827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
	I1217 20:29:38.222940  522827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/functional-655452/id_rsa Username:docker}
	I1217 20:29:38.324557  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1217 20:29:38.324643  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 20:29:38.342369  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1217 20:29:38.342442  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 20:29:38.361702  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1217 20:29:38.361816  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 20:29:38.379229  522827 provision.go:87] duration metric: took 455.281269ms to configureAuth
	I1217 20:29:38.379306  522827 ubuntu.go:206] setting minikube options for container-runtime
	I1217 20:29:38.379506  522827 config.go:182] Loaded profile config "functional-655452": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 20:29:38.379650  522827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
	I1217 20:29:38.397098  522827 main.go:143] libmachine: Using SSH client type: native
	I1217 20:29:38.397425  522827 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1217 20:29:38.397449  522827 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 20:29:38.710104  522827 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 20:29:38.710129  522827 machine.go:97] duration metric: took 1.250909554s to provisionDockerMachine
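The /etc/sysconfig/crio.minikube file written a few steps earlier injects --insecure-registry for the service CIDR (10.96.0.0/12) so CRI-O accepts in-cluster registries over plain HTTP; this only takes effect if the crio systemd unit sources that file, which is an assumption about the kicbase image rather than something shown in this log. A hedged way to check both pieces:

    # Illustrative only: confirm the drop-in exists and that the crio unit references it.
    cat /etc/sysconfig/crio.minikube
    systemctl cat crio | grep -E 'EnvironmentFile|CRIO_MINIKUBE_OPTIONS'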
	I1217 20:29:38.710141  522827 start.go:293] postStartSetup for "functional-655452" (driver="docker")
	I1217 20:29:38.710173  522827 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 20:29:38.710243  522827 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 20:29:38.710290  522827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
	I1217 20:29:38.729105  522827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/functional-655452/id_rsa Username:docker}
	I1217 20:29:38.823561  522827 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 20:29:38.826921  522827 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1217 20:29:38.826944  522827 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1217 20:29:38.826949  522827 command_runner.go:130] > VERSION_ID="12"
	I1217 20:29:38.826954  522827 command_runner.go:130] > VERSION="12 (bookworm)"
	I1217 20:29:38.826958  522827 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1217 20:29:38.826962  522827 command_runner.go:130] > ID=debian
	I1217 20:29:38.826966  522827 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1217 20:29:38.826971  522827 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1217 20:29:38.826976  522827 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1217 20:29:38.827033  522827 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 20:29:38.827056  522827 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 20:29:38.827068  522827 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-485134/.minikube/addons for local assets ...
	I1217 20:29:38.827127  522827 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-485134/.minikube/files for local assets ...
	I1217 20:29:38.827213  522827 filesync.go:149] local asset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem -> 4884122.pem in /etc/ssl/certs
	I1217 20:29:38.827224  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem -> /etc/ssl/certs/4884122.pem
	I1217 20:29:38.827310  522827 filesync.go:149] local asset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/test/nested/copy/488412/hosts -> hosts in /etc/test/nested/copy/488412
	I1217 20:29:38.827318  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/test/nested/copy/488412/hosts -> /etc/test/nested/copy/488412/hosts
	I1217 20:29:38.827361  522827 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/488412
	I1217 20:29:38.835073  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem --> /etc/ssl/certs/4884122.pem (1708 bytes)
	I1217 20:29:38.853051  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/test/nested/copy/488412/hosts --> /etc/test/nested/copy/488412/hosts (40 bytes)
	I1217 20:29:38.870277  522827 start.go:296] duration metric: took 160.119138ms for postStartSetup
	I1217 20:29:38.870416  522827 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 20:29:38.870497  522827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
	I1217 20:29:38.887313  522827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/functional-655452/id_rsa Username:docker}
	I1217 20:29:38.980667  522827 command_runner.go:130] > 14%
	I1217 20:29:38.980748  522827 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 20:29:38.985147  522827 command_runner.go:130] > 169G
	I1217 20:29:38.985687  522827 fix.go:56] duration metric: took 1.546626529s for fixHost
	I1217 20:29:38.985712  522827 start.go:83] releasing machines lock for "functional-655452", held for 1.546675825s
	I1217 20:29:38.985789  522827 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-655452
	I1217 20:29:39.004882  522827 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem (1338 bytes)
	W1217 20:29:39.004958  522827 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412_empty.pem, impossibly tiny 0 bytes
	I1217 20:29:39.004969  522827 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 20:29:39.005005  522827 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem (1082 bytes)
	I1217 20:29:39.005049  522827 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem (1123 bytes)
	I1217 20:29:39.005073  522827 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem (1675 bytes)
	I1217 20:29:39.005126  522827 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem (1708 bytes)
	I1217 20:29:39.005177  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:29:39.005197  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem -> /usr/share/ca-certificates/488412.pem
	I1217 20:29:39.005217  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem -> /usr/share/ca-certificates/4884122.pem
	I1217 20:29:39.005238  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 20:29:39.005294  522827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
	I1217 20:29:39.023309  522827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/functional-655452/id_rsa Username:docker}
	I1217 20:29:39.128919  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem --> /usr/share/ca-certificates/488412.pem (1338 bytes)
	I1217 20:29:39.146238  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem --> /usr/share/ca-certificates/4884122.pem (1708 bytes)
	I1217 20:29:39.163663  522827 ssh_runner.go:195] Run: openssl version
	I1217 20:29:39.169395  522827 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1217 20:29:39.169821  522827 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/488412.pem
	I1217 20:29:39.177042  522827 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/488412.pem /etc/ssl/certs/488412.pem
	I1217 20:29:39.184227  522827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/488412.pem
	I1217 20:29:39.187671  522827 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 17 20:21 /usr/share/ca-certificates/488412.pem
	I1217 20:29:39.187835  522827 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 20:21 /usr/share/ca-certificates/488412.pem
	I1217 20:29:39.187899  522827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/488412.pem
	I1217 20:29:39.232645  522827 command_runner.go:130] > 51391683
	I1217 20:29:39.233156  522827 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 20:29:39.240764  522827 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4884122.pem
	I1217 20:29:39.248070  522827 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4884122.pem /etc/ssl/certs/4884122.pem
	I1217 20:29:39.256139  522827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4884122.pem
	I1217 20:29:39.260468  522827 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 17 20:21 /usr/share/ca-certificates/4884122.pem
	I1217 20:29:39.260613  522827 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 20:21 /usr/share/ca-certificates/4884122.pem
	I1217 20:29:39.260717  522827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4884122.pem
	I1217 20:29:39.301324  522827 command_runner.go:130] > 3ec20f2e
	I1217 20:29:39.301774  522827 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 20:29:39.309564  522827 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:29:39.316908  522827 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 20:29:39.330430  522827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:29:39.334931  522827 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 17 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:29:39.335647  522827 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:29:39.335725  522827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:29:39.377554  522827 command_runner.go:130] > b5213941
	I1217 20:29:39.378955  522827 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
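The ln -fs / openssl x509 -hash pairs above implement OpenSSL's hashed-directory lookup: -hash prints the subject-name hash (b5213941 for minikubeCA in this run), and a symlink named <hash>.0 in /etc/ssl/certs is what lets TLS clients find the CA at verification time. A minimal sketch of the same steps for one certificate:

    # Recreate the <subject-hash>.0 symlink convention used above (illustrative).
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
    sudo test -L "/etc/ssl/certs/${HASH}.0" && echo "CA symlink in place"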
	I1217 20:29:39.389619  522827 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-certificates >/dev/null 2>&1 && sudo update-ca-certificates || true"
	I1217 20:29:39.393257  522827 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-trust >/dev/null 2>&1 && sudo update-ca-trust extract || true"
	I1217 20:29:39.396841  522827 ssh_runner.go:195] Run: cat /version.json
	I1217 20:29:39.396923  522827 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 20:29:39.487006  522827 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1217 20:29:39.489563  522827 command_runner.go:130] > {"iso_version": "v1.37.0-1765579389-22117", "kicbase_version": "v0.0.48-1765661130-22141", "minikube_version": "v1.37.0", "commit": "cbb33128a244032d08f8fc6e6c9f03b30f0da3e4"}
	I1217 20:29:39.489734  522827 ssh_runner.go:195] Run: systemctl --version
	I1217 20:29:39.495686  522827 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1217 20:29:39.495789  522827 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1217 20:29:39.496199  522827 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 20:29:39.531768  522827 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1217 20:29:39.536045  522827 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1217 20:29:39.536498  522827 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 20:29:39.536609  522827 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 20:29:39.544584  522827 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 20:29:39.544609  522827 start.go:496] detecting cgroup driver to use...
	I1217 20:29:39.544639  522827 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 20:29:39.544686  522827 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 20:29:39.559677  522827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 20:29:39.572537  522827 docker.go:218] disabling cri-docker service (if available) ...
	I1217 20:29:39.572629  522827 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 20:29:39.588063  522827 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 20:29:39.601417  522827 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 20:29:39.711338  522827 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 20:29:39.828534  522827 docker.go:234] disabling docker service ...
	I1217 20:29:39.828602  522827 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 20:29:39.843450  522827 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 20:29:39.856661  522827 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 20:29:39.988443  522827 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 20:29:40.133139  522827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 20:29:40.147217  522827 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 20:29:40.161697  522827 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
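crictl reads /etc/crictl.yaml by default, so the runtime-endpoint written above is what lets the later bare crictl calls reach CRI-O; the endpoint can also be passed explicitly, which is a quick way to sanity-check the socket wiring (illustrative):

    # Query CRI-O through the endpoint just written to /etc/crictl.yaml.
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock info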
	I1217 20:29:40.163096  522827 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 20:29:40.163182  522827 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:29:40.173178  522827 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1217 20:29:40.173338  522827 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:29:40.182803  522827 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:29:40.192168  522827 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:29:40.201463  522827 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 20:29:40.209602  522827 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:29:40.218600  522827 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:29:40.227088  522827 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
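Taken together, the sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: they pin the pause image, force the cgroupfs cgroup manager, re-add conmon_cgroup = "pod" (the value CRI-O expects alongside cgroupfs), and ensure a default_sysctls list that opens unprivileged low ports. Reconstructed from those commands (not copied from the machine), the drop-in should end up containing roughly:

    pause_image = "registry.k8s.io/pause:3.10.1"    # typically under [crio.image]
    cgroup_manager = "cgroupfs"                     # typically under [crio.runtime]
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]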
	I1217 20:29:40.236327  522827 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 20:29:40.243154  522827 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1217 20:29:40.244193  522827 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 20:29:40.251635  522827 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:29:40.361488  522827 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 20:29:40.546740  522827 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 20:29:40.546847  522827 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 20:29:40.551021  522827 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1217 20:29:40.551089  522827 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1217 20:29:40.551102  522827 command_runner.go:130] > Device: 0,72	Inode: 1636        Links: 1
	I1217 20:29:40.551127  522827 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1217 20:29:40.551137  522827 command_runner.go:130] > Access: 2025-12-17 20:29:40.478294705 +0000
	I1217 20:29:40.551143  522827 command_runner.go:130] > Modify: 2025-12-17 20:29:40.478294705 +0000
	I1217 20:29:40.551149  522827 command_runner.go:130] > Change: 2025-12-17 20:29:40.478294705 +0000
	I1217 20:29:40.551152  522827 command_runner.go:130] >  Birth: -
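The "Will wait 60s for socket path" step boils down to polling until the restarted daemon recreates its UNIX socket; the stat output above is the success case. The same wait as a sketch:

    # Illustrative only: wait up to 60s for CRI-O to expose its socket after restart.
    for i in $(seq 1 60); do
      test -S /var/run/crio/crio.sock && break
      sleep 1
    done
    test -S /var/run/crio/crio.sock || echo 'crio.sock did not appear' >&2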
	I1217 20:29:40.551189  522827 start.go:564] Will wait 60s for crictl version
	I1217 20:29:40.551247  522827 ssh_runner.go:195] Run: which crictl
	I1217 20:29:40.554786  522827 command_runner.go:130] > /usr/local/bin/crictl
	I1217 20:29:40.554923  522827 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 20:29:40.577444  522827 command_runner.go:130] > Version:  0.1.0
	I1217 20:29:40.577470  522827 command_runner.go:130] > RuntimeName:  cri-o
	I1217 20:29:40.577476  522827 command_runner.go:130] > RuntimeVersion:  1.34.3
	I1217 20:29:40.577491  522827 command_runner.go:130] > RuntimeApiVersion:  v1
	I1217 20:29:40.579694  522827 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 20:29:40.579819  522827 ssh_runner.go:195] Run: crio --version
	I1217 20:29:40.609324  522827 command_runner.go:130] > crio version 1.34.3
	I1217 20:29:40.609350  522827 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1217 20:29:40.609357  522827 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1217 20:29:40.609362  522827 command_runner.go:130] >    GitTreeState:   dirty
	I1217 20:29:40.609367  522827 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1217 20:29:40.609371  522827 command_runner.go:130] >    GoVersion:      go1.24.6
	I1217 20:29:40.609375  522827 command_runner.go:130] >    Compiler:       gc
	I1217 20:29:40.609382  522827 command_runner.go:130] >    Platform:       linux/arm64
	I1217 20:29:40.609386  522827 command_runner.go:130] >    Linkmode:       static
	I1217 20:29:40.609390  522827 command_runner.go:130] >    BuildTags:
	I1217 20:29:40.609393  522827 command_runner.go:130] >      static
	I1217 20:29:40.609397  522827 command_runner.go:130] >      netgo
	I1217 20:29:40.609401  522827 command_runner.go:130] >      osusergo
	I1217 20:29:40.609410  522827 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1217 20:29:40.609414  522827 command_runner.go:130] >      seccomp
	I1217 20:29:40.609421  522827 command_runner.go:130] >      apparmor
	I1217 20:29:40.609424  522827 command_runner.go:130] >      selinux
	I1217 20:29:40.609429  522827 command_runner.go:130] >    LDFlags:          unknown
	I1217 20:29:40.609433  522827 command_runner.go:130] >    SeccompEnabled:   true
	I1217 20:29:40.609441  522827 command_runner.go:130] >    AppArmorEnabled:  false
	I1217 20:29:40.609527  522827 ssh_runner.go:195] Run: crio --version
	I1217 20:29:40.638467  522827 command_runner.go:130] > crio version 1.34.3
	I1217 20:29:40.638491  522827 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1217 20:29:40.638499  522827 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1217 20:29:40.638505  522827 command_runner.go:130] >    GitTreeState:   dirty
	I1217 20:29:40.638509  522827 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1217 20:29:40.638516  522827 command_runner.go:130] >    GoVersion:      go1.24.6
	I1217 20:29:40.638520  522827 command_runner.go:130] >    Compiler:       gc
	I1217 20:29:40.638533  522827 command_runner.go:130] >    Platform:       linux/arm64
	I1217 20:29:40.638543  522827 command_runner.go:130] >    Linkmode:       static
	I1217 20:29:40.638547  522827 command_runner.go:130] >    BuildTags:
	I1217 20:29:40.638550  522827 command_runner.go:130] >      static
	I1217 20:29:40.638554  522827 command_runner.go:130] >      netgo
	I1217 20:29:40.638558  522827 command_runner.go:130] >      osusergo
	I1217 20:29:40.638568  522827 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1217 20:29:40.638572  522827 command_runner.go:130] >      seccomp
	I1217 20:29:40.638576  522827 command_runner.go:130] >      apparmor
	I1217 20:29:40.638583  522827 command_runner.go:130] >      selinux
	I1217 20:29:40.638587  522827 command_runner.go:130] >    LDFlags:          unknown
	I1217 20:29:40.638592  522827 command_runner.go:130] >    SeccompEnabled:   true
	I1217 20:29:40.638604  522827 command_runner.go:130] >    AppArmorEnabled:  false
	I1217 20:29:40.644077  522827 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	I1217 20:29:40.647046  522827 cli_runner.go:164] Run: docker network inspect functional-655452 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
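The long --format argument above is a Go template that the Docker CLI evaluates against the network's inspect JSON; the index function is how templates reach map keys containing dots, such as com.docker.network.driver.mtu. A smaller template against the same object shows the mechanism (illustrative):

    # Pull just the subnet and gateway from the network's IPAM block.
    docker network inspect functional-655452 \
      --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'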
	I1217 20:29:40.665190  522827 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1217 20:29:40.669398  522827 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1217 20:29:40.669593  522827 kubeadm.go:884] updating cluster {Name:functional-655452 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-655452 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 20:29:40.669700  522827 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1217 20:29:40.669779  522827 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 20:29:40.704282  522827 command_runner.go:130] > {
	I1217 20:29:40.704302  522827 command_runner.go:130] >   "images":  [
	I1217 20:29:40.704307  522827 command_runner.go:130] >     {
	I1217 20:29:40.704316  522827 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1217 20:29:40.704321  522827 command_runner.go:130] >       "repoTags":  [
	I1217 20:29:40.704328  522827 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1217 20:29:40.704331  522827 command_runner.go:130] >       ],
	I1217 20:29:40.704335  522827 command_runner.go:130] >       "repoDigests":  [
	I1217 20:29:40.704350  522827 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1217 20:29:40.704362  522827 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1217 20:29:40.704370  522827 command_runner.go:130] >       ],
	I1217 20:29:40.704374  522827 command_runner.go:130] >       "size":  "111333938",
	I1217 20:29:40.704379  522827 command_runner.go:130] >       "username":  "",
	I1217 20:29:40.704389  522827 command_runner.go:130] >       "pinned":  false
	I1217 20:29:40.704403  522827 command_runner.go:130] >     },
	I1217 20:29:40.704406  522827 command_runner.go:130] >     {
	I1217 20:29:40.704413  522827 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1217 20:29:40.704419  522827 command_runner.go:130] >       "repoTags":  [
	I1217 20:29:40.704425  522827 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1217 20:29:40.704429  522827 command_runner.go:130] >       ],
	I1217 20:29:40.704433  522827 command_runner.go:130] >       "repoDigests":  [
	I1217 20:29:40.704445  522827 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1217 20:29:40.704454  522827 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1217 20:29:40.704460  522827 command_runner.go:130] >       ],
	I1217 20:29:40.704464  522827 command_runner.go:130] >       "size":  "29037500",
	I1217 20:29:40.704468  522827 command_runner.go:130] >       "username":  "",
	I1217 20:29:40.704476  522827 command_runner.go:130] >       "pinned":  false
	I1217 20:29:40.704482  522827 command_runner.go:130] >     },
	I1217 20:29:40.704485  522827 command_runner.go:130] >     {
	I1217 20:29:40.704494  522827 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1217 20:29:40.704503  522827 command_runner.go:130] >       "repoTags":  [
	I1217 20:29:40.704509  522827 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1217 20:29:40.704512  522827 command_runner.go:130] >       ],
	I1217 20:29:40.704516  522827 command_runner.go:130] >       "repoDigests":  [
	I1217 20:29:40.704528  522827 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1217 20:29:40.704536  522827 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1217 20:29:40.704542  522827 command_runner.go:130] >       ],
	I1217 20:29:40.704547  522827 command_runner.go:130] >       "size":  "74491780",
	I1217 20:29:40.704551  522827 command_runner.go:130] >       "username":  "nonroot",
	I1217 20:29:40.704556  522827 command_runner.go:130] >       "pinned":  false
	I1217 20:29:40.704561  522827 command_runner.go:130] >     },
	I1217 20:29:40.704568  522827 command_runner.go:130] >     {
	I1217 20:29:40.704579  522827 command_runner.go:130] >       "id":  "271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57",
	I1217 20:29:40.704583  522827 command_runner.go:130] >       "repoTags":  [
	I1217 20:29:40.704588  522827 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.6-0"
	I1217 20:29:40.704594  522827 command_runner.go:130] >       ],
	I1217 20:29:40.704598  522827 command_runner.go:130] >       "repoDigests":  [
	I1217 20:29:40.704605  522827 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890",
	I1217 20:29:40.704613  522827 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:aa0d8bc8f6a6c3655b8efe0a10c5bf052f5574ebe13f904c5b0c9002ce4b2561"
	I1217 20:29:40.704619  522827 command_runner.go:130] >       ],
	I1217 20:29:40.704623  522827 command_runner.go:130] >       "size":  "60850387",
	I1217 20:29:40.704626  522827 command_runner.go:130] >       "uid":  {
	I1217 20:29:40.704630  522827 command_runner.go:130] >         "value":  "0"
	I1217 20:29:40.704636  522827 command_runner.go:130] >       },
	I1217 20:29:40.704645  522827 command_runner.go:130] >       "username":  "",
	I1217 20:29:40.704657  522827 command_runner.go:130] >       "pinned":  false
	I1217 20:29:40.704660  522827 command_runner.go:130] >     },
	I1217 20:29:40.704664  522827 command_runner.go:130] >     {
	I1217 20:29:40.704673  522827 command_runner.go:130] >       "id":  "3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54",
	I1217 20:29:40.704679  522827 command_runner.go:130] >       "repoTags":  [
	I1217 20:29:40.704685  522827 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-rc.1"
	I1217 20:29:40.704689  522827 command_runner.go:130] >       ],
	I1217 20:29:40.704693  522827 command_runner.go:130] >       "repoDigests":  [
	I1217 20:29:40.704704  522827 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:58367b5c0428495c0c12411fa7a018f5d40fe57307b85d8935b1ed35706ff7ee",
	I1217 20:29:40.704721  522827 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:e6ee3594f9ff061c53d6721bc04b810ec4227e28da3bd98e59206d552d45cde8"
	I1217 20:29:40.704724  522827 command_runner.go:130] >       ],
	I1217 20:29:40.704729  522827 command_runner.go:130] >       "size":  "85015535",
	I1217 20:29:40.704735  522827 command_runner.go:130] >       "uid":  {
	I1217 20:29:40.704739  522827 command_runner.go:130] >         "value":  "0"
	I1217 20:29:40.704742  522827 command_runner.go:130] >       },
	I1217 20:29:40.704746  522827 command_runner.go:130] >       "username":  "",
	I1217 20:29:40.704753  522827 command_runner.go:130] >       "pinned":  false
	I1217 20:29:40.704756  522827 command_runner.go:130] >     },
	I1217 20:29:40.704759  522827 command_runner.go:130] >     {
	I1217 20:29:40.704772  522827 command_runner.go:130] >       "id":  "a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a",
	I1217 20:29:40.704779  522827 command_runner.go:130] >       "repoTags":  [
	I1217 20:29:40.704785  522827 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1"
	I1217 20:29:40.704788  522827 command_runner.go:130] >       ],
	I1217 20:29:40.704793  522827 command_runner.go:130] >       "repoDigests":  [
	I1217 20:29:40.704803  522827 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:42360249c0c729ed0542bc8e4a6cd9ba4df358a4e5a140f6c24d5f966ee5121f",
	I1217 20:29:40.704813  522827 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:57ab0f75f58d99f4be7bff7bdda015fcbf1b7c20e58ba2722c8c39f751dc8c98"
	I1217 20:29:40.704822  522827 command_runner.go:130] >       ],
	I1217 20:29:40.704827  522827 command_runner.go:130] >       "size":  "72170325",
	I1217 20:29:40.704831  522827 command_runner.go:130] >       "uid":  {
	I1217 20:29:40.704835  522827 command_runner.go:130] >         "value":  "0"
	I1217 20:29:40.704838  522827 command_runner.go:130] >       },
	I1217 20:29:40.704842  522827 command_runner.go:130] >       "username":  "",
	I1217 20:29:40.704846  522827 command_runner.go:130] >       "pinned":  false
	I1217 20:29:40.704848  522827 command_runner.go:130] >     },
	I1217 20:29:40.704851  522827 command_runner.go:130] >     {
	I1217 20:29:40.704858  522827 command_runner.go:130] >       "id":  "7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e",
	I1217 20:29:40.704861  522827 command_runner.go:130] >       "repoTags":  [
	I1217 20:29:40.704866  522827 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-rc.1"
	I1217 20:29:40.704870  522827 command_runner.go:130] >       ],
	I1217 20:29:40.704875  522827 command_runner.go:130] >       "repoDigests":  [
	I1217 20:29:40.704883  522827 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:709cbcd809826ad98b553d8e283a04db70fa653526d1c2a5e1b50000701b2b6f",
	I1217 20:29:40.704894  522827 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bdd1fa8b53558a2e1967379a36b085c93faf15581e5fa9f212baf679d89c5bb5"
	I1217 20:29:40.704898  522827 command_runner.go:130] >       ],
	I1217 20:29:40.704903  522827 command_runner.go:130] >       "size":  "74107287",
	I1217 20:29:40.704910  522827 command_runner.go:130] >       "username":  "",
	I1217 20:29:40.704914  522827 command_runner.go:130] >       "pinned":  false
	I1217 20:29:40.704926  522827 command_runner.go:130] >     },
	I1217 20:29:40.704930  522827 command_runner.go:130] >     {
	I1217 20:29:40.704936  522827 command_runner.go:130] >       "id":  "abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde",
	I1217 20:29:40.704940  522827 command_runner.go:130] >       "repoTags":  [
	I1217 20:29:40.704946  522827 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-rc.1"
	I1217 20:29:40.704949  522827 command_runner.go:130] >       ],
	I1217 20:29:40.704963  522827 command_runner.go:130] >       "repoDigests":  [
	I1217 20:29:40.704975  522827 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8155e3db27c7081abfc8eb5da70820cfeaf0bba7449e45360e8220e670f417d3",
	I1217 20:29:40.704993  522827 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:9ac9664e74153a60bf2c27af77561abc33d85a716a48893c7e50ad356adc4ea0"
	I1217 20:29:40.705000  522827 command_runner.go:130] >       ],
	I1217 20:29:40.705005  522827 command_runner.go:130] >       "size":  "49822549",
	I1217 20:29:40.705008  522827 command_runner.go:130] >       "uid":  {
	I1217 20:29:40.705014  522827 command_runner.go:130] >         "value":  "0"
	I1217 20:29:40.705017  522827 command_runner.go:130] >       },
	I1217 20:29:40.705025  522827 command_runner.go:130] >       "username":  "",
	I1217 20:29:40.705029  522827 command_runner.go:130] >       "pinned":  false
	I1217 20:29:40.705033  522827 command_runner.go:130] >     },
	I1217 20:29:40.705036  522827 command_runner.go:130] >     {
	I1217 20:29:40.705043  522827 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1217 20:29:40.705055  522827 command_runner.go:130] >       "repoTags":  [
	I1217 20:29:40.705060  522827 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1217 20:29:40.705063  522827 command_runner.go:130] >       ],
	I1217 20:29:40.705068  522827 command_runner.go:130] >       "repoDigests":  [
	I1217 20:29:40.705078  522827 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1217 20:29:40.705089  522827 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1217 20:29:40.705094  522827 command_runner.go:130] >       ],
	I1217 20:29:40.705097  522827 command_runner.go:130] >       "size":  "519884",
	I1217 20:29:40.705101  522827 command_runner.go:130] >       "uid":  {
	I1217 20:29:40.705108  522827 command_runner.go:130] >         "value":  "65535"
	I1217 20:29:40.705111  522827 command_runner.go:130] >       },
	I1217 20:29:40.705115  522827 command_runner.go:130] >       "username":  "",
	I1217 20:29:40.705119  522827 command_runner.go:130] >       "pinned":  true
	I1217 20:29:40.705128  522827 command_runner.go:130] >     }
	I1217 20:29:40.705133  522827 command_runner.go:130] >   ]
	I1217 20:29:40.705136  522827 command_runner.go:130] > }
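The preload check consumes this JSON programmatically; the same listing can be reduced by hand with jq, assuming jq is available (the field names match the output above):

    # Flatten the crictl image listing to its tags.
    sudo crictl images --output json | jq -r '.images[].repoTags[]'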
	I1217 20:29:40.705310  522827 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 20:29:40.705323  522827 crio.go:433] Images already preloaded, skipping extraction
	I1217 20:29:40.705384  522827 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 20:29:40.728606  522827 command_runner.go:130] > {
	I1217 20:29:40.728624  522827 command_runner.go:130] >   "images":  [
	I1217 20:29:40.728629  522827 command_runner.go:130] >     {
	I1217 20:29:40.728638  522827 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1217 20:29:40.728643  522827 command_runner.go:130] >       "repoTags":  [
	I1217 20:29:40.728657  522827 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1217 20:29:40.728665  522827 command_runner.go:130] >       ],
	I1217 20:29:40.728669  522827 command_runner.go:130] >       "repoDigests":  [
	I1217 20:29:40.728678  522827 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1217 20:29:40.728686  522827 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1217 20:29:40.728689  522827 command_runner.go:130] >       ],
	I1217 20:29:40.728694  522827 command_runner.go:130] >       "size":  "111333938",
	I1217 20:29:40.728698  522827 command_runner.go:130] >       "username":  "",
	I1217 20:29:40.728705  522827 command_runner.go:130] >       "pinned":  false
	I1217 20:29:40.728708  522827 command_runner.go:130] >     },
	I1217 20:29:40.728711  522827 command_runner.go:130] >     {
	I1217 20:29:40.728718  522827 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1217 20:29:40.728726  522827 command_runner.go:130] >       "repoTags":  [
	I1217 20:29:40.728731  522827 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1217 20:29:40.728735  522827 command_runner.go:130] >       ],
	I1217 20:29:40.728739  522827 command_runner.go:130] >       "repoDigests":  [
	I1217 20:29:40.728747  522827 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1217 20:29:40.728756  522827 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1217 20:29:40.728759  522827 command_runner.go:130] >       ],
	I1217 20:29:40.728763  522827 command_runner.go:130] >       "size":  "29037500",
	I1217 20:29:40.728767  522827 command_runner.go:130] >       "username":  "",
	I1217 20:29:40.728774  522827 command_runner.go:130] >       "pinned":  false
	I1217 20:29:40.728778  522827 command_runner.go:130] >     },
	I1217 20:29:40.728781  522827 command_runner.go:130] >     {
	I1217 20:29:40.728789  522827 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1217 20:29:40.728793  522827 command_runner.go:130] >       "repoTags":  [
	I1217 20:29:40.728798  522827 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1217 20:29:40.728801  522827 command_runner.go:130] >       ],
	I1217 20:29:40.728805  522827 command_runner.go:130] >       "repoDigests":  [
	I1217 20:29:40.728813  522827 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1217 20:29:40.728821  522827 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1217 20:29:40.728824  522827 command_runner.go:130] >       ],
	I1217 20:29:40.728829  522827 command_runner.go:130] >       "size":  "74491780",
	I1217 20:29:40.728833  522827 command_runner.go:130] >       "username":  "nonroot",
	I1217 20:29:40.728840  522827 command_runner.go:130] >       "pinned":  false
	I1217 20:29:40.728843  522827 command_runner.go:130] >     },
	I1217 20:29:40.728846  522827 command_runner.go:130] >     {
	I1217 20:29:40.728853  522827 command_runner.go:130] >       "id":  "271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57",
	I1217 20:29:40.728857  522827 command_runner.go:130] >       "repoTags":  [
	I1217 20:29:40.728862  522827 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.6-0"
	I1217 20:29:40.728866  522827 command_runner.go:130] >       ],
	I1217 20:29:40.728870  522827 command_runner.go:130] >       "repoDigests":  [
	I1217 20:29:40.728877  522827 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890",
	I1217 20:29:40.728887  522827 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:aa0d8bc8f6a6c3655b8efe0a10c5bf052f5574ebe13f904c5b0c9002ce4b2561"
	I1217 20:29:40.728890  522827 command_runner.go:130] >       ],
	I1217 20:29:40.728894  522827 command_runner.go:130] >       "size":  "60850387",
	I1217 20:29:40.728898  522827 command_runner.go:130] >       "uid":  {
	I1217 20:29:40.728902  522827 command_runner.go:130] >         "value":  "0"
	I1217 20:29:40.728904  522827 command_runner.go:130] >       },
	I1217 20:29:40.728913  522827 command_runner.go:130] >       "username":  "",
	I1217 20:29:40.728917  522827 command_runner.go:130] >       "pinned":  false
	I1217 20:29:40.728920  522827 command_runner.go:130] >     },
	I1217 20:29:40.728924  522827 command_runner.go:130] >     {
	I1217 20:29:40.728930  522827 command_runner.go:130] >       "id":  "3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54",
	I1217 20:29:40.728934  522827 command_runner.go:130] >       "repoTags":  [
	I1217 20:29:40.728939  522827 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-rc.1"
	I1217 20:29:40.728943  522827 command_runner.go:130] >       ],
	I1217 20:29:40.728946  522827 command_runner.go:130] >       "repoDigests":  [
	I1217 20:29:40.728954  522827 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:58367b5c0428495c0c12411fa7a018f5d40fe57307b85d8935b1ed35706ff7ee",
	I1217 20:29:40.728962  522827 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:e6ee3594f9ff061c53d6721bc04b810ec4227e28da3bd98e59206d552d45cde8"
	I1217 20:29:40.728965  522827 command_runner.go:130] >       ],
	I1217 20:29:40.728969  522827 command_runner.go:130] >       "size":  "85015535",
	I1217 20:29:40.728972  522827 command_runner.go:130] >       "uid":  {
	I1217 20:29:40.728976  522827 command_runner.go:130] >         "value":  "0"
	I1217 20:29:40.728979  522827 command_runner.go:130] >       },
	I1217 20:29:40.728983  522827 command_runner.go:130] >       "username":  "",
	I1217 20:29:40.728986  522827 command_runner.go:130] >       "pinned":  false
	I1217 20:29:40.728996  522827 command_runner.go:130] >     },
	I1217 20:29:40.728999  522827 command_runner.go:130] >     {
	I1217 20:29:40.729006  522827 command_runner.go:130] >       "id":  "a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a",
	I1217 20:29:40.729009  522827 command_runner.go:130] >       "repoTags":  [
	I1217 20:29:40.729015  522827 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1"
	I1217 20:29:40.729018  522827 command_runner.go:130] >       ],
	I1217 20:29:40.729022  522827 command_runner.go:130] >       "repoDigests":  [
	I1217 20:29:40.729031  522827 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:42360249c0c729ed0542bc8e4a6cd9ba4df358a4e5a140f6c24d5f966ee5121f",
	I1217 20:29:40.729039  522827 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:57ab0f75f58d99f4be7bff7bdda015fcbf1b7c20e58ba2722c8c39f751dc8c98"
	I1217 20:29:40.729042  522827 command_runner.go:130] >       ],
	I1217 20:29:40.729046  522827 command_runner.go:130] >       "size":  "72170325",
	I1217 20:29:40.729049  522827 command_runner.go:130] >       "uid":  {
	I1217 20:29:40.729053  522827 command_runner.go:130] >         "value":  "0"
	I1217 20:29:40.729056  522827 command_runner.go:130] >       },
	I1217 20:29:40.729060  522827 command_runner.go:130] >       "username":  "",
	I1217 20:29:40.729064  522827 command_runner.go:130] >       "pinned":  false
	I1217 20:29:40.729067  522827 command_runner.go:130] >     },
	I1217 20:29:40.729070  522827 command_runner.go:130] >     {
	I1217 20:29:40.729076  522827 command_runner.go:130] >       "id":  "7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e",
	I1217 20:29:40.729081  522827 command_runner.go:130] >       "repoTags":  [
	I1217 20:29:40.729086  522827 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-rc.1"
	I1217 20:29:40.729089  522827 command_runner.go:130] >       ],
	I1217 20:29:40.729093  522827 command_runner.go:130] >       "repoDigests":  [
	I1217 20:29:40.729100  522827 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:709cbcd809826ad98b553d8e283a04db70fa653526d1c2a5e1b50000701b2b6f",
	I1217 20:29:40.729108  522827 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bdd1fa8b53558a2e1967379a36b085c93faf15581e5fa9f212baf679d89c5bb5"
	I1217 20:29:40.729111  522827 command_runner.go:130] >       ],
	I1217 20:29:40.729115  522827 command_runner.go:130] >       "size":  "74107287",
	I1217 20:29:40.729119  522827 command_runner.go:130] >       "username":  "",
	I1217 20:29:40.729123  522827 command_runner.go:130] >       "pinned":  false
	I1217 20:29:40.729125  522827 command_runner.go:130] >     },
	I1217 20:29:40.729128  522827 command_runner.go:130] >     {
	I1217 20:29:40.729135  522827 command_runner.go:130] >       "id":  "abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde",
	I1217 20:29:40.729138  522827 command_runner.go:130] >       "repoTags":  [
	I1217 20:29:40.729147  522827 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-rc.1"
	I1217 20:29:40.729150  522827 command_runner.go:130] >       ],
	I1217 20:29:40.729154  522827 command_runner.go:130] >       "repoDigests":  [
	I1217 20:29:40.729163  522827 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8155e3db27c7081abfc8eb5da70820cfeaf0bba7449e45360e8220e670f417d3",
	I1217 20:29:40.729180  522827 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:9ac9664e74153a60bf2c27af77561abc33d85a716a48893c7e50ad356adc4ea0"
	I1217 20:29:40.729183  522827 command_runner.go:130] >       ],
	I1217 20:29:40.729187  522827 command_runner.go:130] >       "size":  "49822549",
	I1217 20:29:40.729191  522827 command_runner.go:130] >       "uid":  {
	I1217 20:29:40.729195  522827 command_runner.go:130] >         "value":  "0"
	I1217 20:29:40.729198  522827 command_runner.go:130] >       },
	I1217 20:29:40.729202  522827 command_runner.go:130] >       "username":  "",
	I1217 20:29:40.729205  522827 command_runner.go:130] >       "pinned":  false
	I1217 20:29:40.729208  522827 command_runner.go:130] >     },
	I1217 20:29:40.729212  522827 command_runner.go:130] >     {
	I1217 20:29:40.729218  522827 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1217 20:29:40.729221  522827 command_runner.go:130] >       "repoTags":  [
	I1217 20:29:40.729225  522827 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1217 20:29:40.729228  522827 command_runner.go:130] >       ],
	I1217 20:29:40.729232  522827 command_runner.go:130] >       "repoDigests":  [
	I1217 20:29:40.729239  522827 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1217 20:29:40.729246  522827 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1217 20:29:40.729249  522827 command_runner.go:130] >       ],
	I1217 20:29:40.729253  522827 command_runner.go:130] >       "size":  "519884",
	I1217 20:29:40.729256  522827 command_runner.go:130] >       "uid":  {
	I1217 20:29:40.729260  522827 command_runner.go:130] >         "value":  "65535"
	I1217 20:29:40.729263  522827 command_runner.go:130] >       },
	I1217 20:29:40.729267  522827 command_runner.go:130] >       "username":  "",
	I1217 20:29:40.729271  522827 command_runner.go:130] >       "pinned":  true
	I1217 20:29:40.729274  522827 command_runner.go:130] >     }
	I1217 20:29:40.729276  522827 command_runner.go:130] >   ]
	I1217 20:29:40.729279  522827 command_runner.go:130] > }
	I1217 20:29:40.730532  522827 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 20:29:40.730563  522827 cache_images.go:86] Images are preloaded, skipping loading
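
	For reference, the JSON listing above is the CRI image inventory minikube inspects before concluding that all images are preloaded. Below is a minimal Go sketch of consuming that shape, assuming the entries are wrapped in an "images" array as in crictl's JSON output; the field names (id, repoTags, repoDigests, size, pinned) are copied from the log above, and images.json is a hypothetical file holding the captured listing:

	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	// Image mirrors one entry of the CRI image listing shown in the log above.
	type Image struct {
		ID          string   `json:"id"`
		RepoTags    []string `json:"repoTags"`
		RepoDigests []string `json:"repoDigests"`
		Size        string   `json:"size"`
		Pinned      bool     `json:"pinned"`
	}

	// ImageList assumes the entries sit in an "images" array, as in
	// `crictl images -o json` output (the opening of the array is above
	// this excerpt of the log).
	type ImageList struct {
		Images []Image `json:"images"`
	}

	func main() {
		data, err := os.ReadFile("images.json") // hypothetical capture of the JSON above
		if err != nil {
			panic(err)
		}
		var list ImageList
		if err := json.Unmarshal(data, &list); err != nil {
			panic(err)
		}
		want := "registry.k8s.io/kube-proxy:v1.35.0-rc.1"
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				if tag == want {
					fmt.Println("preloaded:", want, "id:", img.ID)
					return
				}
			}
		}
		fmt.Println("not preloaded:", want)
	}

	Run against the listing above, this would report the kube-proxy image for v1.35.0-rc.1 as preloaded.
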
	I1217 20:29:40.730572  522827 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 crio true true} ...
	I1217 20:29:40.730679  522827 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-655452 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-655452 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
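
	For reference, the kubelet unit drop-in and the config line above are generated per node. Below is a minimal Go sketch of rendering such a drop-in with text/template; nodeConfig is an illustrative stand-in (not minikube's actual KubernetesConfig struct) modelling only the three fields that vary in the ExecStart line:

	package main

	import (
		"os"
		"text/template"
	)

	// nodeConfig is an illustrative stand-in for the cluster config;
	// only the fields used in the ExecStart line above are modeled.
	type nodeConfig struct {
		Version  string
		Hostname string
		NodeIP   string
	}

	const unit = `[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

	[Install]
	`

	func main() {
		t := template.Must(template.New("kubelet").Parse(unit))
		cfg := nodeConfig{Version: "v1.35.0-rc.1", Hostname: "functional-655452", NodeIP: "192.168.49.2"}
		if err := t.Execute(os.Stdout, cfg); err != nil {
			panic(err)
		}
	}
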
	I1217 20:29:40.730767  522827 ssh_runner.go:195] Run: crio config
	I1217 20:29:40.759067  522827 command_runner.go:130] ! time="2025-12-17T20:29:40.758680307Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1217 20:29:40.759091  522827 command_runner.go:130] ! time="2025-12-17T20:29:40.758877363Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1217 20:29:40.759355  522827 command_runner.go:130] ! time="2025-12-17T20:29:40.759160664Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1217 20:29:40.759513  522827 command_runner.go:130] ! time="2025-12-17T20:29:40.75929148Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1217 20:29:40.759764  522827 command_runner.go:130] ! time="2025-12-17T20:29:40.759610703Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:29:40.760178  522827 command_runner.go:130] ! time="2025-12-17T20:29:40.759978034Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1217 20:29:40.781892  522827 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
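
	For reference, the messages above show CRI-O's configuration layering: the base file is skipped because /etc/crio/crio.conf does not exist, and the drop-ins under /etc/crio/crio.conf.d are applied in lexical filename order, so 10-crio.conf is applied after 02-crio.conf. Below is a minimal Go sketch of that ordering (illustrative only; filtering to *.conf files is an assumption of the sketch, not CRI-O's exact loader behavior):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	func main() {
		// The base file would be applied first, then each drop-in in
		// lexical filename order, which is why 10-crio.conf overrides
		// 02-crio.conf in the log above.
		const dropInDir = "/etc/crio/crio.conf.d"
		entries, err := os.ReadDir(dropInDir) // os.ReadDir returns entries sorted by filename
		if err != nil {
			panic(err)
		}
		for _, e := range entries {
			if e.IsDir() || !strings.HasSuffix(e.Name(), ".conf") {
				continue
			}
			fmt.Println("would apply drop-in:", filepath.Join(dropInDir, e.Name()))
		}
	}
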
	I1217 20:29:40.789853  522827 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1217 20:29:40.789886  522827 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1217 20:29:40.789894  522827 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1217 20:29:40.789897  522827 command_runner.go:130] > #
	I1217 20:29:40.789905  522827 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1217 20:29:40.789911  522827 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1217 20:29:40.789918  522827 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1217 20:29:40.789927  522827 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1217 20:29:40.789931  522827 command_runner.go:130] > # reload'.
	I1217 20:29:40.789938  522827 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1217 20:29:40.789949  522827 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1217 20:29:40.789959  522827 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1217 20:29:40.789965  522827 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1217 20:29:40.789972  522827 command_runner.go:130] > [crio]
	I1217 20:29:40.789978  522827 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1217 20:29:40.789983  522827 command_runner.go:130] > # container images, in this directory.
	I1217 20:29:40.789993  522827 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1217 20:29:40.790003  522827 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1217 20:29:40.790008  522827 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1217 20:29:40.790017  522827 command_runner.go:130] > # Path to the "imagestore". If set, CRI-O stores all of its images in this directory, separately from Root.
	I1217 20:29:40.790024  522827 command_runner.go:130] > # imagestore = ""
	I1217 20:29:40.790038  522827 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1217 20:29:40.790048  522827 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1217 20:29:40.790053  522827 command_runner.go:130] > # storage_driver = "overlay"
	I1217 20:29:40.790058  522827 command_runner.go:130] > # List of options to pass to the storage driver. Please refer to
	I1217 20:29:40.790065  522827 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1217 20:29:40.790069  522827 command_runner.go:130] > # storage_option = [
	I1217 20:29:40.790073  522827 command_runner.go:130] > # ]
	I1217 20:29:40.790079  522827 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1217 20:29:40.790092  522827 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1217 20:29:40.790100  522827 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1217 20:29:40.790106  522827 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1217 20:29:40.790112  522827 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1217 20:29:40.790119  522827 command_runner.go:130] > # always happen on a node reboot
	I1217 20:29:40.790124  522827 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1217 20:29:40.790139  522827 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1217 20:29:40.790152  522827 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1217 20:29:40.790158  522827 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1217 20:29:40.790162  522827 command_runner.go:130] > # version_file_persist = ""
	I1217 20:29:40.790170  522827 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1217 20:29:40.790180  522827 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1217 20:29:40.790184  522827 command_runner.go:130] > # internal_wipe = true
	I1217 20:29:40.790193  522827 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1217 20:29:40.790202  522827 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1217 20:29:40.790206  522827 command_runner.go:130] > # internal_repair = true
	I1217 20:29:40.790211  522827 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1217 20:29:40.790219  522827 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1217 20:29:40.790226  522827 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1217 20:29:40.790232  522827 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1217 20:29:40.790241  522827 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1217 20:29:40.790251  522827 command_runner.go:130] > [crio.api]
	I1217 20:29:40.790257  522827 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1217 20:29:40.790262  522827 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1217 20:29:40.790271  522827 command_runner.go:130] > # IP address on which the stream server will listen.
	I1217 20:29:40.790278  522827 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1217 20:29:40.790285  522827 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1217 20:29:40.790290  522827 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1217 20:29:40.790297  522827 command_runner.go:130] > # stream_port = "0"
	I1217 20:29:40.790302  522827 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1217 20:29:40.790307  522827 command_runner.go:130] > # stream_enable_tls = false
	I1217 20:29:40.790313  522827 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1217 20:29:40.790320  522827 command_runner.go:130] > # stream_idle_timeout = ""
	I1217 20:29:40.790330  522827 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1217 20:29:40.790339  522827 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1217 20:29:40.790343  522827 command_runner.go:130] > # stream_tls_cert = ""
	I1217 20:29:40.790349  522827 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1217 20:29:40.790357  522827 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1217 20:29:40.790361  522827 command_runner.go:130] > # stream_tls_key = ""
	I1217 20:29:40.790367  522827 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1217 20:29:40.790377  522827 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1217 20:29:40.790382  522827 command_runner.go:130] > # automatically pick up the changes.
	I1217 20:29:40.790385  522827 command_runner.go:130] > # stream_tls_ca = ""
	I1217 20:29:40.790402  522827 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1217 20:29:40.790415  522827 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1217 20:29:40.790423  522827 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1217 20:29:40.790428  522827 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1217 20:29:40.790437  522827 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1217 20:29:40.790443  522827 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1217 20:29:40.790447  522827 command_runner.go:130] > [crio.runtime]
	I1217 20:29:40.790455  522827 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1217 20:29:40.790465  522827 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1217 20:29:40.790470  522827 command_runner.go:130] > # "nofile=1024:2048"
	I1217 20:29:40.790476  522827 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1217 20:29:40.790480  522827 command_runner.go:130] > # default_ulimits = [
	I1217 20:29:40.790486  522827 command_runner.go:130] > # ]
	I1217 20:29:40.790493  522827 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1217 20:29:40.790499  522827 command_runner.go:130] > # no_pivot = false
	I1217 20:29:40.790505  522827 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1217 20:29:40.790511  522827 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1217 20:29:40.790518  522827 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1217 20:29:40.790525  522827 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1217 20:29:40.790530  522827 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1217 20:29:40.790539  522827 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1217 20:29:40.790543  522827 command_runner.go:130] > # conmon = ""
	I1217 20:29:40.790547  522827 command_runner.go:130] > # Cgroup setting for conmon
	I1217 20:29:40.790558  522827 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1217 20:29:40.790563  522827 command_runner.go:130] > conmon_cgroup = "pod"
	I1217 20:29:40.790572  522827 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1217 20:29:40.790585  522827 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1217 20:29:40.790592  522827 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1217 20:29:40.790603  522827 command_runner.go:130] > # conmon_env = [
	I1217 20:29:40.790606  522827 command_runner.go:130] > # ]
	I1217 20:29:40.790611  522827 command_runner.go:130] > # Additional environment variables to set for all the
	I1217 20:29:40.790621  522827 command_runner.go:130] > # containers. These are overridden if set in the
	I1217 20:29:40.790627  522827 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1217 20:29:40.790631  522827 command_runner.go:130] > # default_env = [
	I1217 20:29:40.790634  522827 command_runner.go:130] > # ]
	I1217 20:29:40.790639  522827 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1217 20:29:40.790647  522827 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I1217 20:29:40.790653  522827 command_runner.go:130] > # selinux = false
	I1217 20:29:40.790660  522827 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1217 20:29:40.790675  522827 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1217 20:29:40.790682  522827 command_runner.go:130] > # This option supports live configuration reload.
	I1217 20:29:40.790691  522827 command_runner.go:130] > # seccomp_profile = ""
	I1217 20:29:40.790698  522827 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1217 20:29:40.790703  522827 command_runner.go:130] > # This option supports live configuration reload.
	I1217 20:29:40.790707  522827 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1217 20:29:40.790717  522827 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1217 20:29:40.790723  522827 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1217 20:29:40.790730  522827 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1217 20:29:40.790738  522827 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1217 20:29:40.790744  522827 command_runner.go:130] > # This option supports live configuration reload.
	I1217 20:29:40.790751  522827 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1217 20:29:40.790757  522827 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1217 20:29:40.790761  522827 command_runner.go:130] > # the cgroup blockio controller.
	I1217 20:29:40.790765  522827 command_runner.go:130] > # blockio_config_file = ""
	I1217 20:29:40.790774  522827 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1217 20:29:40.790780  522827 command_runner.go:130] > # blockio parameters.
	I1217 20:29:40.790790  522827 command_runner.go:130] > # blockio_reload = false
	I1217 20:29:40.790796  522827 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1217 20:29:40.790800  522827 command_runner.go:130] > # irqbalance daemon.
	I1217 20:29:40.790805  522827 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1217 20:29:40.790814  522827 command_runner.go:130] > # irqbalance_config_restore_file allows setting a cpu mask CRI-O should
	I1217 20:29:40.790828  522827 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1217 20:29:40.790836  522827 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1217 20:29:40.790845  522827 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1217 20:29:40.790852  522827 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1217 20:29:40.790859  522827 command_runner.go:130] > # This option supports live configuration reload.
	I1217 20:29:40.790863  522827 command_runner.go:130] > # rdt_config_file = ""
	I1217 20:29:40.790869  522827 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1217 20:29:40.790873  522827 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1217 20:29:40.790881  522827 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1217 20:29:40.790885  522827 command_runner.go:130] > # separate_pull_cgroup = ""
	I1217 20:29:40.790892  522827 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1217 20:29:40.790900  522827 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1217 20:29:40.790904  522827 command_runner.go:130] > # will be added.
	I1217 20:29:40.790908  522827 command_runner.go:130] > # default_capabilities = [
	I1217 20:29:40.790920  522827 command_runner.go:130] > # 	"CHOWN",
	I1217 20:29:40.790924  522827 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1217 20:29:40.790927  522827 command_runner.go:130] > # 	"FSETID",
	I1217 20:29:40.790930  522827 command_runner.go:130] > # 	"FOWNER",
	I1217 20:29:40.790940  522827 command_runner.go:130] > # 	"SETGID",
	I1217 20:29:40.790944  522827 command_runner.go:130] > # 	"SETUID",
	I1217 20:29:40.790963  522827 command_runner.go:130] > # 	"SETPCAP",
	I1217 20:29:40.790971  522827 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1217 20:29:40.790975  522827 command_runner.go:130] > # 	"KILL",
	I1217 20:29:40.790977  522827 command_runner.go:130] > # ]
	I1217 20:29:40.790985  522827 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1217 20:29:40.790992  522827 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1217 20:29:40.790999  522827 command_runner.go:130] > # add_inheritable_capabilities = false
	I1217 20:29:40.791005  522827 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1217 20:29:40.791018  522827 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1217 20:29:40.791023  522827 command_runner.go:130] > default_sysctls = [
	I1217 20:29:40.791030  522827 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1217 20:29:40.791033  522827 command_runner.go:130] > ]
	I1217 20:29:40.791038  522827 command_runner.go:130] > # List of devices on the host that a
	I1217 20:29:40.791044  522827 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1217 20:29:40.791048  522827 command_runner.go:130] > # allowed_devices = [
	I1217 20:29:40.791055  522827 command_runner.go:130] > # 	"/dev/fuse",
	I1217 20:29:40.791059  522827 command_runner.go:130] > # 	"/dev/net/tun",
	I1217 20:29:40.791062  522827 command_runner.go:130] > # ]
	I1217 20:29:40.791067  522827 command_runner.go:130] > # List of additional devices, specified as
	I1217 20:29:40.791081  522827 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1217 20:29:40.791088  522827 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1217 20:29:40.791096  522827 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1217 20:29:40.791103  522827 command_runner.go:130] > # additional_devices = [
	I1217 20:29:40.791110  522827 command_runner.go:130] > # ]
	I1217 20:29:40.791115  522827 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1217 20:29:40.791119  522827 command_runner.go:130] > # cdi_spec_dirs = [
	I1217 20:29:40.791122  522827 command_runner.go:130] > # 	"/etc/cdi",
	I1217 20:29:40.791126  522827 command_runner.go:130] > # 	"/var/run/cdi",
	I1217 20:29:40.791130  522827 command_runner.go:130] > # ]
	I1217 20:29:40.791136  522827 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1217 20:29:40.791144  522827 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1217 20:29:40.791149  522827 command_runner.go:130] > # Defaults to false.
	I1217 20:29:40.791156  522827 command_runner.go:130] > # device_ownership_from_security_context = false
	I1217 20:29:40.791164  522827 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1217 20:29:40.791178  522827 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1217 20:29:40.791181  522827 command_runner.go:130] > # hooks_dir = [
	I1217 20:29:40.791186  522827 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1217 20:29:40.791189  522827 command_runner.go:130] > # ]
	I1217 20:29:40.791195  522827 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1217 20:29:40.791205  522827 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1217 20:29:40.791210  522827 command_runner.go:130] > # its default mounts from the following two files:
	I1217 20:29:40.791220  522827 command_runner.go:130] > #
	I1217 20:29:40.791229  522827 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1217 20:29:40.791240  522827 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1217 20:29:40.791248  522827 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1217 20:29:40.791251  522827 command_runner.go:130] > #
	I1217 20:29:40.791257  522827 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1217 20:29:40.791274  522827 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1217 20:29:40.791280  522827 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1217 20:29:40.791285  522827 command_runner.go:130] > #      only add mounts it finds in this file.
	I1217 20:29:40.791288  522827 command_runner.go:130] > #
	I1217 20:29:40.791292  522827 command_runner.go:130] > # default_mounts_file = ""
	I1217 20:29:40.791301  522827 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1217 20:29:40.791316  522827 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1217 20:29:40.791320  522827 command_runner.go:130] > # pids_limit = -1
	I1217 20:29:40.791326  522827 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1217 20:29:40.791335  522827 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1217 20:29:40.791343  522827 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1217 20:29:40.791354  522827 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1217 20:29:40.791357  522827 command_runner.go:130] > # log_size_max = -1
	I1217 20:29:40.791364  522827 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1217 20:29:40.791368  522827 command_runner.go:130] > # log_to_journald = false
	I1217 20:29:40.791374  522827 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1217 20:29:40.791383  522827 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1217 20:29:40.791391  522827 command_runner.go:130] > # Path to directory for container attach sockets.
	I1217 20:29:40.791396  522827 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1217 20:29:40.791401  522827 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1217 20:29:40.791405  522827 command_runner.go:130] > # bind_mount_prefix = ""
	I1217 20:29:40.791417  522827 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1217 20:29:40.791421  522827 command_runner.go:130] > # read_only = false
	I1217 20:29:40.791427  522827 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1217 20:29:40.791437  522827 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1217 20:29:40.791441  522827 command_runner.go:130] > # live configuration reload.
	I1217 20:29:40.791445  522827 command_runner.go:130] > # log_level = "info"
	I1217 20:29:40.791454  522827 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1217 20:29:40.791460  522827 command_runner.go:130] > # This option supports live configuration reload.
	I1217 20:29:40.791466  522827 command_runner.go:130] > # log_filter = ""
	I1217 20:29:40.791472  522827 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1217 20:29:40.791481  522827 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1217 20:29:40.791485  522827 command_runner.go:130] > # separated by comma.
	I1217 20:29:40.791493  522827 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1217 20:29:40.791497  522827 command_runner.go:130] > # uid_mappings = ""
	I1217 20:29:40.791506  522827 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1217 20:29:40.791518  522827 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1217 20:29:40.791523  522827 command_runner.go:130] > # separated by comma.
	I1217 20:29:40.791530  522827 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1217 20:29:40.791535  522827 command_runner.go:130] > # gid_mappings = ""
	I1217 20:29:40.791540  522827 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1217 20:29:40.791549  522827 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1217 20:29:40.791556  522827 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1217 20:29:40.791565  522827 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1217 20:29:40.791572  522827 command_runner.go:130] > # minimum_mappable_uid = -1
	I1217 20:29:40.791604  522827 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1217 20:29:40.791611  522827 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1217 20:29:40.791617  522827 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1217 20:29:40.791627  522827 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1217 20:29:40.791634  522827 command_runner.go:130] > # minimum_mappable_gid = -1
	I1217 20:29:40.791640  522827 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1217 20:29:40.791648  522827 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1217 20:29:40.791662  522827 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1217 20:29:40.791666  522827 command_runner.go:130] > # ctr_stop_timeout = 30
	I1217 20:29:40.791672  522827 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1217 20:29:40.791680  522827 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1217 20:29:40.791685  522827 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1217 20:29:40.791690  522827 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1217 20:29:40.791694  522827 command_runner.go:130] > # drop_infra_ctr = true
	I1217 20:29:40.791700  522827 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1217 20:29:40.791712  522827 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1217 20:29:40.791723  522827 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1217 20:29:40.791727  522827 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1217 20:29:40.791734  522827 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I1217 20:29:40.791743  522827 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1217 20:29:40.791749  522827 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1217 20:29:40.791756  522827 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1217 20:29:40.791760  522827 command_runner.go:130] > # shared_cpuset = ""
	I1217 20:29:40.791766  522827 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1217 20:29:40.791773  522827 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1217 20:29:40.791777  522827 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1217 20:29:40.791784  522827 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1217 20:29:40.791795  522827 command_runner.go:130] > # pinns_path = ""
	I1217 20:29:40.791801  522827 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1217 20:29:40.791807  522827 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1217 20:29:40.791814  522827 command_runner.go:130] > # enable_criu_support = true
	I1217 20:29:40.791819  522827 command_runner.go:130] > # Enable/disable the generation of container and
	I1217 20:29:40.791826  522827 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1217 20:29:40.791833  522827 command_runner.go:130] > # enable_pod_events = false
	I1217 20:29:40.791839  522827 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1217 20:29:40.791845  522827 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1217 20:29:40.791849  522827 command_runner.go:130] > # default_runtime = "crun"
	I1217 20:29:40.791857  522827 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1217 20:29:40.791865  522827 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of creating it as a directory).
	I1217 20:29:40.791874  522827 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1217 20:29:40.791887  522827 command_runner.go:130] > # creation as a file is not desired either.
	I1217 20:29:40.791896  522827 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1217 20:29:40.791903  522827 command_runner.go:130] > # the hostname is being managed dynamically.
	I1217 20:29:40.791910  522827 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1217 20:29:40.791914  522827 command_runner.go:130] > # ]
	I1217 20:29:40.791920  522827 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1217 20:29:40.791929  522827 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1217 20:29:40.791935  522827 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1217 20:29:40.791943  522827 command_runner.go:130] > # Each entry in the table should follow the format:
	I1217 20:29:40.791946  522827 command_runner.go:130] > #
	I1217 20:29:40.791951  522827 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1217 20:29:40.791958  522827 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1217 20:29:40.791964  522827 command_runner.go:130] > # runtime_type = "oci"
	I1217 20:29:40.791969  522827 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1217 20:29:40.791976  522827 command_runner.go:130] > # inherit_default_runtime = false
	I1217 20:29:40.791981  522827 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1217 20:29:40.791986  522827 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1217 20:29:40.791990  522827 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1217 20:29:40.791996  522827 command_runner.go:130] > # monitor_env = []
	I1217 20:29:40.792001  522827 command_runner.go:130] > # privileged_without_host_devices = false
	I1217 20:29:40.792008  522827 command_runner.go:130] > # allowed_annotations = []
	I1217 20:29:40.792014  522827 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1217 20:29:40.792017  522827 command_runner.go:130] > # no_sync_log = false
	I1217 20:29:40.792021  522827 command_runner.go:130] > # default_annotations = {}
	I1217 20:29:40.792028  522827 command_runner.go:130] > # stream_websockets = false
	I1217 20:29:40.792034  522827 command_runner.go:130] > # seccomp_profile = ""
	I1217 20:29:40.792066  522827 command_runner.go:130] > # Where:
	I1217 20:29:40.792076  522827 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1217 20:29:40.792083  522827 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1217 20:29:40.792090  522827 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1217 20:29:40.792098  522827 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1217 20:29:40.792102  522827 command_runner.go:130] > #   in $PATH.
	I1217 20:29:40.792108  522827 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1217 20:29:40.792113  522827 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1217 20:29:40.792122  522827 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1217 20:29:40.792128  522827 command_runner.go:130] > #   state.
	I1217 20:29:40.792134  522827 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1217 20:29:40.792143  522827 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1217 20:29:40.792149  522827 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1217 20:29:40.792155  522827 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1217 20:29:40.792163  522827 command_runner.go:130] > #   the values from the default runtime on load time.
	I1217 20:29:40.792174  522827 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1217 20:29:40.792183  522827 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1217 20:29:40.792190  522827 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1217 20:29:40.792199  522827 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1217 20:29:40.792207  522827 command_runner.go:130] > #   The currently recognized values are:
	I1217 20:29:40.792214  522827 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1217 20:29:40.792222  522827 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1217 20:29:40.792231  522827 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1217 20:29:40.792237  522827 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1217 20:29:40.792251  522827 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1217 20:29:40.792260  522827 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1217 20:29:40.792270  522827 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1217 20:29:40.792277  522827 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1217 20:29:40.792284  522827 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1217 20:29:40.792293  522827 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1217 20:29:40.792309  522827 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1217 20:29:40.792316  522827 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1217 20:29:40.792322  522827 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1217 20:29:40.792331  522827 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1217 20:29:40.792337  522827 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1217 20:29:40.792345  522827 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1217 20:29:40.792353  522827 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1217 20:29:40.792358  522827 command_runner.go:130] > #   deprecated option "conmon".
	I1217 20:29:40.792367  522827 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1217 20:29:40.792380  522827 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1217 20:29:40.792387  522827 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1217 20:29:40.792392  522827 command_runner.go:130] > #   should be moved to the container's cgroup
	I1217 20:29:40.792405  522827 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1217 20:29:40.792410  522827 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1217 20:29:40.792420  522827 command_runner.go:130] > #   When using the pod runtime and conmon-rs, the monitor_env can be used to further configure
	I1217 20:29:40.792424  522827 command_runner.go:130] > #   conmon-rs by using:
	I1217 20:29:40.792432  522827 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1217 20:29:40.792441  522827 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1217 20:29:40.792454  522827 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1217 20:29:40.792465  522827 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1217 20:29:40.792471  522827 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1217 20:29:40.792485  522827 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1217 20:29:40.792497  522827 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1217 20:29:40.792506  522827 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1217 20:29:40.792515  522827 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1217 20:29:40.792524  522827 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1217 20:29:40.792529  522827 command_runner.go:130] > #   when a machine crash happens.
	I1217 20:29:40.792536  522827 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1217 20:29:40.792546  522827 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1217 20:29:40.792558  522827 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1217 20:29:40.792562  522827 command_runner.go:130] > #   seccomp profile for the runtime.
	I1217 20:29:40.792568  522827 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1217 20:29:40.792579  522827 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1217 20:29:40.792582  522827 command_runner.go:130] > #
	I1217 20:29:40.792587  522827 command_runner.go:130] > # Using the seccomp notifier feature:
	I1217 20:29:40.792590  522827 command_runner.go:130] > #
	I1217 20:29:40.792596  522827 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1217 20:29:40.792605  522827 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1217 20:29:40.792608  522827 command_runner.go:130] > #
	I1217 20:29:40.792615  522827 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1217 20:29:40.792630  522827 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1217 20:29:40.792633  522827 command_runner.go:130] > #
	I1217 20:29:40.792642  522827 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1217 20:29:40.792649  522827 command_runner.go:130] > # feature.
	I1217 20:29:40.792652  522827 command_runner.go:130] > #
	I1217 20:29:40.792658  522827 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1217 20:29:40.792667  522827 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1217 20:29:40.792673  522827 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1217 20:29:40.792679  522827 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1217 20:29:40.792688  522827 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1217 20:29:40.792692  522827 command_runner.go:130] > #
	I1217 20:29:40.792702  522827 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1217 20:29:40.792711  522827 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1217 20:29:40.792715  522827 command_runner.go:130] > #
	I1217 20:29:40.792721  522827 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1217 20:29:40.792727  522827 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1217 20:29:40.792732  522827 command_runner.go:130] > #
	I1217 20:29:40.792738  522827 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1217 20:29:40.792744  522827 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1217 20:29:40.792750  522827 command_runner.go:130] > # limitation.
	I1217 20:29:40.792754  522827 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1217 20:29:40.792758  522827 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1217 20:29:40.792761  522827 command_runner.go:130] > runtime_type = ""
	I1217 20:29:40.792765  522827 command_runner.go:130] > runtime_root = "/run/crun"
	I1217 20:29:40.792769  522827 command_runner.go:130] > inherit_default_runtime = false
	I1217 20:29:40.792774  522827 command_runner.go:130] > runtime_config_path = ""
	I1217 20:29:40.792781  522827 command_runner.go:130] > container_min_memory = ""
	I1217 20:29:40.792785  522827 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1217 20:29:40.792796  522827 command_runner.go:130] > monitor_cgroup = "pod"
	I1217 20:29:40.792801  522827 command_runner.go:130] > monitor_exec_cgroup = ""
	I1217 20:29:40.792804  522827 command_runner.go:130] > allowed_annotations = [
	I1217 20:29:40.792809  522827 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1217 20:29:40.792814  522827 command_runner.go:130] > ]
	I1217 20:29:40.792819  522827 command_runner.go:130] > privileged_without_host_devices = false
	I1217 20:29:40.792823  522827 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1217 20:29:40.792828  522827 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1217 20:29:40.792834  522827 command_runner.go:130] > runtime_type = ""
	I1217 20:29:40.792839  522827 command_runner.go:130] > runtime_root = "/run/runc"
	I1217 20:29:40.792842  522827 command_runner.go:130] > inherit_default_runtime = false
	I1217 20:29:40.792846  522827 command_runner.go:130] > runtime_config_path = ""
	I1217 20:29:40.792850  522827 command_runner.go:130] > container_min_memory = ""
	I1217 20:29:40.792856  522827 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1217 20:29:40.792860  522827 command_runner.go:130] > monitor_cgroup = "pod"
	I1217 20:29:40.792864  522827 command_runner.go:130] > monitor_exec_cgroup = ""
	I1217 20:29:40.792875  522827 command_runner.go:130] > privileged_without_host_devices = false
	I1217 20:29:40.792884  522827 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1217 20:29:40.792890  522827 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1217 20:29:40.792896  522827 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1217 20:29:40.792907  522827 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1217 20:29:40.792918  522827 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1217 20:29:40.792930  522827 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores; this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1217 20:29:40.792940  522827 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1217 20:29:40.792947  522827 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1217 20:29:40.792958  522827 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1217 20:29:40.792975  522827 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1217 20:29:40.792980  522827 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1217 20:29:40.792998  522827 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1217 20:29:40.793004  522827 command_runner.go:130] > # Example:
	I1217 20:29:40.793009  522827 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1217 20:29:40.793014  522827 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1217 20:29:40.793019  522827 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1217 20:29:40.793025  522827 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1217 20:29:40.793029  522827 command_runner.go:130] > # cpuset = "0-1"
	I1217 20:29:40.793033  522827 command_runner.go:130] > # cpushares = "5"
	I1217 20:29:40.793039  522827 command_runner.go:130] > # cpuquota = "1000"
	I1217 20:29:40.793043  522827 command_runner.go:130] > # cpuperiod = "100000"
	I1217 20:29:40.793050  522827 command_runner.go:130] > # cpulimit = "35"
	I1217 20:29:40.793059  522827 command_runner.go:130] > # Where:
	I1217 20:29:40.793066  522827 command_runner.go:130] > # The workload name is workload-type.
	I1217 20:29:40.793073  522827 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1217 20:29:40.793079  522827 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1217 20:29:40.793087  522827 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1217 20:29:40.793096  522827 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1217 20:29:40.793101  522827 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1217 20:29:40.793106  522827 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1217 20:29:40.793116  522827 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1217 20:29:40.793122  522827 command_runner.go:130] > # Default value is set to true
	I1217 20:29:40.793132  522827 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1217 20:29:40.793141  522827 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1217 20:29:40.793146  522827 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1217 20:29:40.793150  522827 command_runner.go:130] > # Default value is set to 'false'
	I1217 20:29:40.793155  522827 command_runner.go:130] > # disable_hostport_mapping = false
	I1217 20:29:40.793163  522827 command_runner.go:130] > # timezone: Sets the timezone for a container in CRI-O.
	I1217 20:29:40.793172  522827 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1217 20:29:40.793175  522827 command_runner.go:130] > # timezone = ""
	I1217 20:29:40.793185  522827 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1217 20:29:40.793188  522827 command_runner.go:130] > #
	I1217 20:29:40.793194  522827 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1217 20:29:40.793212  522827 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1217 20:29:40.793215  522827 command_runner.go:130] > [crio.image]
	I1217 20:29:40.793222  522827 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1217 20:29:40.793229  522827 command_runner.go:130] > # default_transport = "docker://"
	I1217 20:29:40.793236  522827 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1217 20:29:40.793243  522827 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1217 20:29:40.793249  522827 command_runner.go:130] > # global_auth_file = ""
	I1217 20:29:40.793255  522827 command_runner.go:130] > # The image used to instantiate infra containers.
	I1217 20:29:40.793260  522827 command_runner.go:130] > # This option supports live configuration reload.
	I1217 20:29:40.793264  522827 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1217 20:29:40.793271  522827 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1217 20:29:40.793277  522827 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1217 20:29:40.793283  522827 command_runner.go:130] > # This option supports live configuration reload.
	I1217 20:29:40.793289  522827 command_runner.go:130] > # pause_image_auth_file = ""
	I1217 20:29:40.793295  522827 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1217 20:29:40.793304  522827 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1217 20:29:40.793311  522827 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1217 20:29:40.793317  522827 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1217 20:29:40.793323  522827 command_runner.go:130] > # pause_command = "/pause"
	I1217 20:29:40.793329  522827 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1217 20:29:40.793335  522827 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1217 20:29:40.793342  522827 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1217 20:29:40.793351  522827 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1217 20:29:40.793357  522827 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1217 20:29:40.793372  522827 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1217 20:29:40.793376  522827 command_runner.go:130] > # pinned_images = [
	I1217 20:29:40.793379  522827 command_runner.go:130] > # ]
	I1217 20:29:40.793388  522827 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1217 20:29:40.793401  522827 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1217 20:29:40.793408  522827 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1217 20:29:40.793416  522827 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1217 20:29:40.793422  522827 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1217 20:29:40.793426  522827 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1217 20:29:40.793432  522827 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1217 20:29:40.793439  522827 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1217 20:29:40.793445  522827 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1217 20:29:40.793456  522827 command_runner.go:130] > # or the concatenated path is nonexistent, then the signature_policy or system
	I1217 20:29:40.793462  522827 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1217 20:29:40.793467  522827 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1217 20:29:40.793473  522827 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1217 20:29:40.793479  522827 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1217 20:29:40.793483  522827 command_runner.go:130] > # changing them here.
	I1217 20:29:40.793488  522827 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1217 20:29:40.793492  522827 command_runner.go:130] > # insecure_registries = [
	I1217 20:29:40.793495  522827 command_runner.go:130] > # ]
	I1217 20:29:40.793514  522827 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1217 20:29:40.793522  522827 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1217 20:29:40.793526  522827 command_runner.go:130] > # image_volumes = "mkdir"
	I1217 20:29:40.793532  522827 command_runner.go:130] > # Temporary directory to use for storing big files
	I1217 20:29:40.793538  522827 command_runner.go:130] > # big_files_temporary_dir = ""
	I1217 20:29:40.793544  522827 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1217 20:29:40.793554  522827 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1217 20:29:40.793558  522827 command_runner.go:130] > # auto_reload_registries = false
	I1217 20:29:40.793564  522827 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1217 20:29:40.793572  522827 command_runner.go:130] > # gets canceled. This value is also used to calculate the pull progress interval, as pull_progress_timeout / 10.
	I1217 20:29:40.793584  522827 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1217 20:29:40.793589  522827 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1217 20:29:40.793594  522827 command_runner.go:130] > # The mode of short name resolution.
	I1217 20:29:40.793600  522827 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1217 20:29:40.793607  522827 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1217 20:29:40.793613  522827 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1217 20:29:40.793624  522827 command_runner.go:130] > # short_name_mode = "enforcing"
	I1217 20:29:40.793631  522827 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1217 20:29:40.793636  522827 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1217 20:29:40.793643  522827 command_runner.go:130] > # oci_artifact_mount_support = true
	I1217 20:29:40.793649  522827 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1217 20:29:40.793653  522827 command_runner.go:130] > # CNI plugins.
	I1217 20:29:40.793662  522827 command_runner.go:130] > [crio.network]
	I1217 20:29:40.793669  522827 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1217 20:29:40.793674  522827 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1217 20:29:40.793678  522827 command_runner.go:130] > # cni_default_network = ""
	I1217 20:29:40.793683  522827 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1217 20:29:40.793688  522827 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1217 20:29:40.793695  522827 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1217 20:29:40.793701  522827 command_runner.go:130] > # plugin_dirs = [
	I1217 20:29:40.793705  522827 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1217 20:29:40.793708  522827 command_runner.go:130] > # ]
	I1217 20:29:40.793712  522827 command_runner.go:130] > # List of included pod metrics.
	I1217 20:29:40.793716  522827 command_runner.go:130] > # included_pod_metrics = [
	I1217 20:29:40.793721  522827 command_runner.go:130] > # ]
	I1217 20:29:40.793727  522827 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1217 20:29:40.793733  522827 command_runner.go:130] > [crio.metrics]
	I1217 20:29:40.793738  522827 command_runner.go:130] > # Globally enable or disable metrics support.
	I1217 20:29:40.793742  522827 command_runner.go:130] > # enable_metrics = false
	I1217 20:29:40.793749  522827 command_runner.go:130] > # Specify enabled metrics collectors.
	I1217 20:29:40.793754  522827 command_runner.go:130] > # Per default all metrics are enabled.
	I1217 20:29:40.793760  522827 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1217 20:29:40.793769  522827 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1217 20:29:40.793781  522827 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1217 20:29:40.793788  522827 command_runner.go:130] > # metrics_collectors = [
	I1217 20:29:40.793792  522827 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1217 20:29:40.793796  522827 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1217 20:29:40.793801  522827 command_runner.go:130] > # 	"containers_oom_total",
	I1217 20:29:40.793810  522827 command_runner.go:130] > # 	"processes_defunct",
	I1217 20:29:40.793814  522827 command_runner.go:130] > # 	"operations_total",
	I1217 20:29:40.793818  522827 command_runner.go:130] > # 	"operations_latency_seconds",
	I1217 20:29:40.793825  522827 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1217 20:29:40.793830  522827 command_runner.go:130] > # 	"operations_errors_total",
	I1217 20:29:40.793834  522827 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1217 20:29:40.793838  522827 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1217 20:29:40.793843  522827 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1217 20:29:40.793847  522827 command_runner.go:130] > # 	"image_pulls_success_total",
	I1217 20:29:40.793851  522827 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1217 20:29:40.793857  522827 command_runner.go:130] > # 	"containers_oom_count_total",
	I1217 20:29:40.793862  522827 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1217 20:29:40.793869  522827 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1217 20:29:40.793873  522827 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1217 20:29:40.793876  522827 command_runner.go:130] > # ]
	I1217 20:29:40.793882  522827 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1217 20:29:40.793888  522827 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1217 20:29:40.793894  522827 command_runner.go:130] > # The port on which the metrics server will listen.
	I1217 20:29:40.793898  522827 command_runner.go:130] > # metrics_port = 9090
	I1217 20:29:40.793905  522827 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1217 20:29:40.793909  522827 command_runner.go:130] > # metrics_socket = ""
	I1217 20:29:40.793920  522827 command_runner.go:130] > # The certificate for the secure metrics server.
	I1217 20:29:40.793926  522827 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1217 20:29:40.793932  522827 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1217 20:29:40.793939  522827 command_runner.go:130] > # certificate on any modification event.
	I1217 20:29:40.793942  522827 command_runner.go:130] > # metrics_cert = ""
	I1217 20:29:40.793947  522827 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1217 20:29:40.793959  522827 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1217 20:29:40.793967  522827 command_runner.go:130] > # metrics_key = ""
	I1217 20:29:40.793980  522827 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1217 20:29:40.793983  522827 command_runner.go:130] > [crio.tracing]
	I1217 20:29:40.793989  522827 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1217 20:29:40.793996  522827 command_runner.go:130] > # enable_tracing = false
	I1217 20:29:40.794002  522827 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1217 20:29:40.794006  522827 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1217 20:29:40.794015  522827 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1217 20:29:40.794020  522827 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1217 20:29:40.794024  522827 command_runner.go:130] > # CRI-O NRI configuration.
	I1217 20:29:40.794027  522827 command_runner.go:130] > [crio.nri]
	I1217 20:29:40.794031  522827 command_runner.go:130] > # Globally enable or disable NRI.
	I1217 20:29:40.794035  522827 command_runner.go:130] > # enable_nri = true
	I1217 20:29:40.794039  522827 command_runner.go:130] > # NRI socket to listen on.
	I1217 20:29:40.794045  522827 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1217 20:29:40.794050  522827 command_runner.go:130] > # NRI plugin directory to use.
	I1217 20:29:40.794061  522827 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1217 20:29:40.794066  522827 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1217 20:29:40.794073  522827 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1217 20:29:40.794082  522827 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1217 20:29:40.794150  522827 command_runner.go:130] > # nri_disable_connections = false
	I1217 20:29:40.794172  522827 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1217 20:29:40.794178  522827 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1217 20:29:40.794186  522827 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1217 20:29:40.794191  522827 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1217 20:29:40.794200  522827 command_runner.go:130] > # NRI default validator configuration.
	I1217 20:29:40.794211  522827 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1217 20:29:40.794218  522827 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1217 20:29:40.794225  522827 command_runner.go:130] > # can be restricted/rejected:
	I1217 20:29:40.794229  522827 command_runner.go:130] > # - OCI hook injection
	I1217 20:29:40.794235  522827 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1217 20:29:40.794240  522827 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1217 20:29:40.794245  522827 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1217 20:29:40.794252  522827 command_runner.go:130] > # - adjustment of linux namespaces
	I1217 20:29:40.794263  522827 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1217 20:29:40.794277  522827 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1217 20:29:40.794284  522827 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1217 20:29:40.794295  522827 command_runner.go:130] > #
	I1217 20:29:40.794299  522827 command_runner.go:130] > # [crio.nri.default_validator]
	I1217 20:29:40.794304  522827 command_runner.go:130] > # nri_enable_default_validator = false
	I1217 20:29:40.794312  522827 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1217 20:29:40.794318  522827 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1217 20:29:40.794326  522827 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1217 20:29:40.794338  522827 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1217 20:29:40.794343  522827 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1217 20:29:40.794347  522827 command_runner.go:130] > # nri_validator_required_plugins = [
	I1217 20:29:40.794352  522827 command_runner.go:130] > # ]
	I1217 20:29:40.794359  522827 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
	I1217 20:29:40.794368  522827 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1217 20:29:40.794373  522827 command_runner.go:130] > [crio.stats]
	I1217 20:29:40.794386  522827 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1217 20:29:40.794392  522827 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1217 20:29:40.794398  522827 command_runner.go:130] > # stats_collection_period = 0
	I1217 20:29:40.794405  522827 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1217 20:29:40.794411  522827 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1217 20:29:40.794417  522827 command_runner.go:130] > # collection_period = 0
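
The config dump above is CRI-O's commented-out defaults; the only uncommented setting visible in this part of the dump is the signature_policy minikube injects. As a minimal sketch of how one of these options would actually be overridden (the drop-in path and restart step are assumptions, not something this run performs), enabling the metrics endpoint described under [crio.metrics] could look like:

    # hypothetical drop-in; this test run leaves enable_metrics at its default (false)
    sudo tee /etc/crio/crio.conf.d/99-metrics.conf <<'EOF'
    [crio.metrics]
    enable_metrics = true
    metrics_port = 9090
    EOF
    sudo systemctl restart crio
    # scrape a couple of the collectors listed above to confirm
    curl -s http://127.0.0.1:9090/metrics | grep -E 'crio_operations|image_pulls' | head
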
	I1217 20:29:40.794552  522827 cni.go:84] Creating CNI manager for ""
	I1217 20:29:40.794571  522827 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 20:29:40.794583  522827 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 20:29:40.794609  522827 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-655452 NodeName:functional-655452 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 20:29:40.794745  522827 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-655452"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
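minikube writes this rendered config to /var/tmp/minikube/kubeadm.yaml.new (see the scp below) and only reconfigures if it differs from the file already on the node. A quick standalone sanity check of such a file, assuming the bundled kubeadm binary is used (kubeadm v1.26+ ships a validate subcommand), could be:

    # hedged sketch; the test itself relies on diffing kubeadm.yaml against kubeadm.yaml.new instead
    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new
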
	I1217 20:29:40.794827  522827 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1217 20:29:40.802768  522827 command_runner.go:130] > kubeadm
	I1217 20:29:40.802789  522827 command_runner.go:130] > kubectl
	I1217 20:29:40.802794  522827 command_runner.go:130] > kubelet
	I1217 20:29:40.802809  522827 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 20:29:40.802895  522827 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 20:29:40.810641  522827 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1217 20:29:40.826893  522827 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1217 20:29:40.841576  522827 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
	I1217 20:29:40.856014  522827 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1217 20:29:40.859640  522827 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1217 20:29:40.860204  522827 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:29:40.970449  522827 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 20:29:41.821239  522827 certs.go:69] Setting up /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452 for IP: 192.168.49.2
	I1217 20:29:41.821266  522827 certs.go:195] generating shared ca certs ...
	I1217 20:29:41.821284  522827 certs.go:227] acquiring lock for ca certs: {Name:mk2b1bd9fa0b029b02dd38a7b1b08717d4426eca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:29:41.821441  522827 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.key
	I1217 20:29:41.821492  522827 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.key
	I1217 20:29:41.821509  522827 certs.go:257] generating profile certs ...
	I1217 20:29:41.821619  522827 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/client.key
	I1217 20:29:41.821682  522827 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/apiserver.key.aa95dda5
	I1217 20:29:41.821733  522827 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/proxy-client.key
	I1217 20:29:41.821747  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1217 20:29:41.821765  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1217 20:29:41.821780  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1217 20:29:41.821791  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1217 20:29:41.821805  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1217 20:29:41.821817  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1217 20:29:41.821831  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1217 20:29:41.821846  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1217 20:29:41.821894  522827 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem (1338 bytes)
	W1217 20:29:41.821945  522827 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412_empty.pem, impossibly tiny 0 bytes
	I1217 20:29:41.821959  522827 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 20:29:41.821996  522827 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem (1082 bytes)
	I1217 20:29:41.822031  522827 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem (1123 bytes)
	I1217 20:29:41.822058  522827 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem (1675 bytes)
	I1217 20:29:41.822104  522827 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem (1708 bytes)
	I1217 20:29:41.822138  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:29:41.822159  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem -> /usr/share/ca-certificates/488412.pem
	I1217 20:29:41.822175  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem -> /usr/share/ca-certificates/4884122.pem
	I1217 20:29:41.822802  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 20:29:41.845035  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 20:29:41.868336  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 20:29:41.901049  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1217 20:29:41.918871  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 20:29:41.937168  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1217 20:29:41.954450  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 20:29:41.971684  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 20:29:41.988884  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 20:29:42.008645  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem --> /usr/share/ca-certificates/488412.pem (1338 bytes)
	I1217 20:29:42.029398  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem --> /usr/share/ca-certificates/4884122.pem (1708 bytes)
	I1217 20:29:42.047332  522827 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 20:29:42.061588  522827 ssh_runner.go:195] Run: openssl version
	I1217 20:29:42.068928  522827 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1217 20:29:42.069476  522827 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/488412.pem
	I1217 20:29:42.078814  522827 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/488412.pem /etc/ssl/certs/488412.pem
	I1217 20:29:42.088990  522827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/488412.pem
	I1217 20:29:42.093920  522827 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 17 20:21 /usr/share/ca-certificates/488412.pem
	I1217 20:29:42.093987  522827 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 20:21 /usr/share/ca-certificates/488412.pem
	I1217 20:29:42.094097  522827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/488412.pem
	I1217 20:29:42.137804  522827 command_runner.go:130] > 51391683
	I1217 20:29:42.138358  522827 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 20:29:42.147537  522827 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4884122.pem
	I1217 20:29:42.157061  522827 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4884122.pem /etc/ssl/certs/4884122.pem
	I1217 20:29:42.166751  522827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4884122.pem
	I1217 20:29:42.171759  522827 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 17 20:21 /usr/share/ca-certificates/4884122.pem
	I1217 20:29:42.171865  522827 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 20:21 /usr/share/ca-certificates/4884122.pem
	I1217 20:29:42.172010  522827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4884122.pem
	I1217 20:29:42.222515  522827 command_runner.go:130] > 3ec20f2e
	I1217 20:29:42.222600  522827 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 20:29:42.231935  522827 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:29:42.242232  522827 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 20:29:42.250913  522827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:29:42.255543  522827 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 17 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:29:42.255609  522827 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:29:42.255686  522827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:29:42.298361  522827 command_runner.go:130] > b5213941
	I1217 20:29:42.298457  522827 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
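
The three openssl x509 -hash / ln -fs pairs above reproduce OpenSSL's c_rehash convention: each CA certificate under /usr/share/ca-certificates gets a <subject-hash>.0 symlink in /etc/ssl/certs so verification can find it by hash. For one certificate, the equivalent by hand (illustrative only) is:

    cert=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$cert")     # prints b5213941 for this CA
    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"    # .0 = first cert with this subject hash
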
	I1217 20:29:42.307141  522827 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 20:29:42.311232  522827 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 20:29:42.311338  522827 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1217 20:29:42.311364  522827 command_runner.go:130] > Device: 259,1	Inode: 1313050     Links: 1
	I1217 20:29:42.311390  522827 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1217 20:29:42.311425  522827 command_runner.go:130] > Access: 2025-12-17 20:25:34.088053460 +0000
	I1217 20:29:42.311446  522827 command_runner.go:130] > Modify: 2025-12-17 20:21:29.777917427 +0000
	I1217 20:29:42.311461  522827 command_runner.go:130] > Change: 2025-12-17 20:21:29.777917427 +0000
	I1217 20:29:42.311467  522827 command_runner.go:130] >  Birth: 2025-12-17 20:21:29.777917427 +0000
	I1217 20:29:42.311555  522827 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 20:29:42.352885  522827 command_runner.go:130] > Certificate will not expire
	I1217 20:29:42.353302  522827 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 20:29:42.407045  522827 command_runner.go:130] > Certificate will not expire
	I1217 20:29:42.407143  522827 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 20:29:42.455863  522827 command_runner.go:130] > Certificate will not expire
	I1217 20:29:42.456326  522827 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 20:29:42.505636  522827 command_runner.go:130] > Certificate will not expire
	I1217 20:29:42.506227  522827 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 20:29:42.548331  522827 command_runner.go:130] > Certificate will not expire
	I1217 20:29:42.548862  522827 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1217 20:29:42.590705  522827 command_runner.go:130] > Certificate will not expire
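
Each -checkend 86400 call above asks whether the certificate will still be valid 24 hours from now: openssl prints "Certificate will not expire" and exits 0 if so, or "Certificate will expire" and exits 1 otherwise, and the exit status is what drives minikube's regeneration decision. A standalone sketch:

    # the exit status, not the printed text, is the signal
    if openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver.crt; then
      echo "apiserver cert valid for at least another 24h"
    else
      echo "apiserver cert needs regeneration"
    fi
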
	I1217 20:29:42.591277  522827 kubeadm.go:401] StartCluster: {Name:functional-655452 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-655452 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:29:42.591354  522827 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 20:29:42.591425  522827 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 20:29:42.618986  522827 cri.go:89] found id: ""
	I1217 20:29:42.619059  522827 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 20:29:42.626323  522827 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1217 20:29:42.626347  522827 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1217 20:29:42.626355  522827 command_runner.go:130] > /var/lib/minikube/etcd:
	I1217 20:29:42.627403  522827 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 20:29:42.627425  522827 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 20:29:42.627476  522827 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 20:29:42.635033  522827 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 20:29:42.635439  522827 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-655452" does not appear in /home/jenkins/minikube-integration/21808-485134/kubeconfig
	I1217 20:29:42.635552  522827 kubeconfig.go:62] /home/jenkins/minikube-integration/21808-485134/kubeconfig needs updating (will repair): [kubeconfig missing "functional-655452" cluster setting kubeconfig missing "functional-655452" context setting]
	I1217 20:29:42.635844  522827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-485134/kubeconfig: {Name:mkc7214551fa855d6d4575d627c3ecd54291b351 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:29:42.636278  522827 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21808-485134/kubeconfig
	I1217 20:29:42.636437  522827 kapi.go:59] client config for functional-655452: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/client.crt", KeyFile:"/home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/client.key", CAFile:"/home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb51f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 20:29:42.636955  522827 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1217 20:29:42.636974  522827 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1217 20:29:42.636979  522827 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1217 20:29:42.636984  522827 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1217 20:29:42.636988  522827 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1217 20:29:42.637054  522827 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1217 20:29:42.637345  522827 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 20:29:42.646583  522827 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1217 20:29:42.646685  522827 kubeadm.go:602] duration metric: took 19.253149ms to restartPrimaryControlPlane
	I1217 20:29:42.646744  522827 kubeadm.go:403] duration metric: took 55.459532ms to StartCluster
	I1217 20:29:42.646789  522827 settings.go:142] acquiring lock: {Name:mk91fac3a58d91b836cd0701183ef7cc1e571672 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:29:42.646894  522827 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21808-485134/kubeconfig
	I1217 20:29:42.647795  522827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-485134/kubeconfig: {Name:mkc7214551fa855d6d4575d627c3ecd54291b351 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:29:42.648137  522827 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 20:29:42.648371  522827 config.go:182] Loaded profile config "functional-655452": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 20:29:42.648423  522827 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 20:29:42.648485  522827 addons.go:70] Setting storage-provisioner=true in profile "functional-655452"
	I1217 20:29:42.648497  522827 addons.go:239] Setting addon storage-provisioner=true in "functional-655452"
	I1217 20:29:42.648521  522827 host.go:66] Checking if "functional-655452" exists ...
	I1217 20:29:42.648902  522827 addons.go:70] Setting default-storageclass=true in profile "functional-655452"
	I1217 20:29:42.648999  522827 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-655452"
	I1217 20:29:42.649042  522827 cli_runner.go:164] Run: docker container inspect functional-655452 --format={{.State.Status}}
	I1217 20:29:42.649424  522827 cli_runner.go:164] Run: docker container inspect functional-655452 --format={{.State.Status}}
	I1217 20:29:42.653921  522827 out.go:179] * Verifying Kubernetes components...
	I1217 20:29:42.656821  522827 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:29:42.689834  522827 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21808-485134/kubeconfig
	I1217 20:29:42.690004  522827 kapi.go:59] client config for functional-655452: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/client.crt", KeyFile:"/home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/client.key", CAFile:"/home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb51f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 20:29:42.690276  522827 addons.go:239] Setting addon default-storageclass=true in "functional-655452"
	I1217 20:29:42.690305  522827 host.go:66] Checking if "functional-655452" exists ...
	I1217 20:29:42.690860  522827 cli_runner.go:164] Run: docker container inspect functional-655452 --format={{.State.Status}}
	I1217 20:29:42.692598  522827 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 20:29:42.699772  522827 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:29:42.699803  522827 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 20:29:42.699871  522827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
	I1217 20:29:42.735975  522827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/functional-655452/id_rsa Username:docker}
	I1217 20:29:42.743517  522827 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 20:29:42.743543  522827 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 20:29:42.743664  522827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
	I1217 20:29:42.778325  522827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/functional-655452/id_rsa Username:docker}
	I1217 20:29:42.848025  522827 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 20:29:42.860324  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:29:42.899199  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 20:29:43.321927  522827 node_ready.go:35] waiting up to 6m0s for node "functional-655452" to be "Ready" ...
	I1217 20:29:43.322118  522827 type.go:168] "Request Body" body=""
	I1217 20:29:43.322203  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:43.322465  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:29:43.322528  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:43.322567  522827 retry.go:31] will retry after 172.422642ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:43.322648  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:29:43.322689  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:43.322715  522827 retry.go:31] will retry after 167.097093ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:43.322809  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:43.490380  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 20:29:43.496229  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:29:43.581353  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:29:43.581433  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:43.581460  522827 retry.go:31] will retry after 331.036154ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:43.581553  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:29:43.581605  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:43.581639  522827 retry.go:31] will retry after 400.38477ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:43.822877  522827 type.go:168] "Request Body" body=""
	I1217 20:29:43.822949  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:43.823300  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:43.912722  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 20:29:43.970874  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:29:43.974629  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:43.974708  522827 retry.go:31] will retry after 462.319516ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:43.982922  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:29:44.044566  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:29:44.048683  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:44.048723  522827 retry.go:31] will retry after 443.115947ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:44.323122  522827 type.go:168] "Request Body" body=""
	I1217 20:29:44.323200  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:44.323555  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:44.437879  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 20:29:44.492501  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:29:44.499443  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:29:44.499482  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:44.499520  522827 retry.go:31] will retry after 1.265386144s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:44.551004  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:29:44.551045  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:44.551085  522827 retry.go:31] will retry after 774.139673ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:44.822655  522827 type.go:168] "Request Body" body=""
	I1217 20:29:44.822811  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:44.823195  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:45.323027  522827 type.go:168] "Request Body" body=""
	I1217 20:29:45.323135  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:45.323507  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:29:45.323621  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
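
The GET https://192.168.49.2:8441/api/v1/nodes/functional-655452 polling above comes from minikube's node readiness wait: roughly every 500 ms it fetches the node, checks its Ready condition, and logs the warning above when the connection is refused. A hedged client-go sketch of that kind of check (assumed standard client-go import paths; this is not minikube's node_ready.go):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady fetches the named node and reports its Ready condition.
func nodeReady(clientset *kubernetes.Clientset, name string) (bool, error) {
	node, err := clientset.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		// While the apiserver is down this is the "connection refused"
		// path that produces the warning lines in this log.
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)
	ready, err := nodeReady(clientset, "functional-655452")
	fmt.Println("ready:", ready, "err:", err)
}
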
	I1217 20:29:45.325715  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:29:45.391952  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:29:45.395668  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:45.395750  522827 retry.go:31] will retry after 1.529541916s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:45.765134  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 20:29:45.822845  522827 type.go:168] "Request Body" body=""
	I1217 20:29:45.822973  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:45.823280  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:45.823537  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:29:45.827173  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:45.827206  522827 retry.go:31] will retry after 637.037829ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:46.322836  522827 type.go:168] "Request Body" body=""
	I1217 20:29:46.322927  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:46.323203  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:46.464492  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 20:29:46.525009  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:29:46.525062  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:46.525083  522827 retry.go:31] will retry after 1.110973738s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:46.822245  522827 type.go:168] "Request Body" body=""
	I1217 20:29:46.822323  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:46.822662  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:46.926099  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:29:46.987960  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:29:46.988006  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:46.988028  522827 retry.go:31] will retry after 1.385710629s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:47.322640  522827 type.go:168] "Request Body" body=""
	I1217 20:29:47.322715  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:47.323041  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:47.636709  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 20:29:47.697205  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:29:47.697243  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:47.697264  522827 retry.go:31] will retry after 4.090194732s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:47.822497  522827 type.go:168] "Request Body" body=""
	I1217 20:29:47.822589  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:47.822932  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:29:47.822989  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:29:48.322659  522827 type.go:168] "Request Body" body=""
	I1217 20:29:48.322736  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:48.323019  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:48.374352  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:29:48.431979  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:29:48.435409  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:48.435442  522827 retry.go:31] will retry after 3.099398493s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:48.823142  522827 type.go:168] "Request Body" body=""
	I1217 20:29:48.823220  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:48.823522  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:49.322226  522827 type.go:168] "Request Body" body=""
	I1217 20:29:49.322316  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:49.322645  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:49.822280  522827 type.go:168] "Request Body" body=""
	I1217 20:29:49.822356  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:49.822709  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:50.322247  522827 type.go:168] "Request Body" body=""
	I1217 20:29:50.322328  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:50.322668  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:29:50.322721  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:29:50.822373  522827 type.go:168] "Request Body" body=""
	I1217 20:29:50.822449  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:50.822719  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:51.322273  522827 type.go:168] "Request Body" body=""
	I1217 20:29:51.322361  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:51.322682  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:51.535119  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:29:51.608419  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:29:51.608461  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:51.608504  522827 retry.go:31] will retry after 5.948755722s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
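
Each failed apply above is rescheduled by retry.go with a growing, jittered delay: roughly 400 ms at first, climbing toward tens of seconds later in the log. A minimal sketch of that retry-with-backoff pattern, under the assumption of doubling-plus-jitter; the helper name is hypothetical and this is not minikube's actual retry.go API:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff runs fn until it succeeds or attempts are exhausted,
// doubling the base delay each round and adding random jitter so that
// concurrent appliers (storageclass.yaml and storage-provisioner.yaml in
// this log) do not retry in lockstep.
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	delay := base
	for i := 0; i < attempts; i++ {
		if err := fn(); err == nil {
			return nil
		} else {
			sleep := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: %v\n", sleep, err)
			time.Sleep(sleep)
			delay *= 2
		}
	}
	return errors.New("all retries failed")
}

func main() {
	_ = retryWithBackoff(5, 400*time.Millisecond, func() error {
		// Stands in for the kubectl apply failure recorded in this log.
		return errors.New("connection refused")
	})
}
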
	I1217 20:29:51.787984  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 20:29:51.822458  522827 type.go:168] "Request Body" body=""
	I1217 20:29:51.822538  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:51.822817  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:51.846041  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:29:51.846085  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:51.846105  522827 retry.go:31] will retry after 5.856724643s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:52.322893  522827 type.go:168] "Request Body" body=""
	I1217 20:29:52.322982  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:52.323271  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:29:52.323320  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:29:52.822254  522827 type.go:168] "Request Body" body=""
	I1217 20:29:52.822329  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:52.822653  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:53.322391  522827 type.go:168] "Request Body" body=""
	I1217 20:29:53.322479  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:53.322825  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:53.822201  522827 type.go:168] "Request Body" body=""
	I1217 20:29:53.822273  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:53.822595  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:54.322265  522827 type.go:168] "Request Body" body=""
	I1217 20:29:54.322351  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:54.322683  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:54.822243  522827 type.go:168] "Request Body" body=""
	I1217 20:29:54.822322  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:54.822646  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:29:54.822705  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:29:55.322383  522827 type.go:168] "Request Body" body=""
	I1217 20:29:55.322466  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:55.322739  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:55.822262  522827 type.go:168] "Request Body" body=""
	I1217 20:29:55.822349  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:55.822680  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:56.322404  522827 type.go:168] "Request Body" body=""
	I1217 20:29:56.322493  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:56.322874  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:56.822564  522827 type.go:168] "Request Body" body=""
	I1217 20:29:56.822678  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:56.823046  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:29:56.823109  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:29:57.322771  522827 type.go:168] "Request Body" body=""
	I1217 20:29:57.322846  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:57.323141  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:57.557506  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:29:57.638482  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:29:57.642516  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:57.642548  522827 retry.go:31] will retry after 4.405911356s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:57.703796  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 20:29:57.764881  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:29:57.764928  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:57.764950  522827 retry.go:31] will retry after 7.580168113s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:57.823128  522827 type.go:168] "Request Body" body=""
	I1217 20:29:57.823235  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:57.823556  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:58.322216  522827 type.go:168] "Request Body" body=""
	I1217 20:29:58.322291  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:58.322579  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:58.822288  522827 type.go:168] "Request Body" body=""
	I1217 20:29:58.822434  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:58.822838  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:59.322555  522827 type.go:168] "Request Body" body=""
	I1217 20:29:59.322632  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:59.322948  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:29:59.323004  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:29:59.822770  522827 type.go:168] "Request Body" body=""
	I1217 20:29:59.822844  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:59.823119  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:00.323032  522827 type.go:168] "Request Body" body=""
	I1217 20:30:00.323116  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:00.323489  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:00.822257  522827 type.go:168] "Request Body" body=""
	I1217 20:30:00.822349  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:00.822678  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:01.322375  522827 type.go:168] "Request Body" body=""
	I1217 20:30:01.322459  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:01.322808  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:01.822288  522827 type.go:168] "Request Body" body=""
	I1217 20:30:01.822382  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:01.822690  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:30:01.822741  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:30:02.049201  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:30:02.136097  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:30:02.136138  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:30:02.136156  522827 retry.go:31] will retry after 5.567678678s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:30:02.322750  522827 type.go:168] "Request Body" body=""
	I1217 20:30:02.322843  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:02.323173  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:02.822939  522827 type.go:168] "Request Body" body=""
	I1217 20:30:02.823008  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:02.823350  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:03.323175  522827 type.go:168] "Request Body" body=""
	I1217 20:30:03.323258  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:03.323612  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:03.822172  522827 type.go:168] "Request Body" body=""
	I1217 20:30:03.822257  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:03.822603  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:04.322314  522827 type.go:168] "Request Body" body=""
	I1217 20:30:04.322401  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:04.322723  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:30:04.322781  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:30:04.822229  522827 type.go:168] "Request Body" body=""
	I1217 20:30:04.822306  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:04.822675  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:05.322398  522827 type.go:168] "Request Body" body=""
	I1217 20:30:05.322478  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:05.322839  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:05.346115  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 20:30:05.408232  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:30:05.408289  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:30:05.408313  522827 retry.go:31] will retry after 10.078206747s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:30:05.822854  522827 type.go:168] "Request Body" body=""
	I1217 20:30:05.822945  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:05.823317  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:06.323102  522827 type.go:168] "Request Body" body=""
	I1217 20:30:06.323172  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:06.323471  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:30:06.323519  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:30:06.822291  522827 type.go:168] "Request Body" body=""
	I1217 20:30:06.822371  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:06.822701  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:07.322790  522827 type.go:168] "Request Body" body=""
	I1217 20:30:07.322867  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:07.323162  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:07.703974  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:30:07.764647  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:30:07.764701  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:30:07.764721  522827 retry.go:31] will retry after 19.009086903s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:30:07.822843  522827 type.go:168] "Request Body" body=""
	I1217 20:30:07.822915  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:07.823267  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:08.323079  522827 type.go:168] "Request Body" body=""
	I1217 20:30:08.323151  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:08.323471  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:08.822188  522827 type.go:168] "Request Body" body=""
	I1217 20:30:08.822263  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:08.822521  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:30:08.822572  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:30:09.322241  522827 type.go:168] "Request Body" body=""
	I1217 20:30:09.322340  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:09.322671  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:09.822374  522827 type.go:168] "Request Body" body=""
	I1217 20:30:09.822457  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:09.822805  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... 11 further identical GET polls at ~500ms intervals through 20:30:15.32, each refused; node_ready.go:55 "will retry" warnings at 20:30:10.82 and 20:30:12.82 elided ...]
	W1217 20:30:15.322700  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
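
The poll loop above is minikube waiting for the node's Ready condition while the apiserver on 192.168.49.2:8441 is down; every Get fails at the TCP dial, so the condition is never inspected. A minimal sketch of that loop, assuming client-go and a hypothetical waitNodeReady helper (not minikube's own code):

```go
// Sketch only: polls a node's Ready condition every 500ms, the cadence
// visible in the log timestamps, retrying on connection errors.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
	tick := time.NewTicker(500 * time.Millisecond)
	defer tick.Stop()
	for {
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-tick.C:
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				// Matches the node_ready.go:55 warnings above.
				fmt.Printf("error getting node %q (will retry): %v\n", name, err)
				continue
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	if err := waitNodeReady(ctx, cs, "functional-655452"); err != nil {
		fmt.Println("node never became Ready:", err)
	}
}
```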
	I1217 20:30:15.487149  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 20:30:15.557091  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:30:15.557136  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:30:15.557155  522827 retry.go:31] will retry after 12.964696684s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
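
The apply fails before anything reaches the cluster: with kubectl's default --validate behavior, the client must first download the OpenAPI schema from the server, and that download hits the same refused port. The escape hatch kubectl itself suggests is --validate=false. A minimal sketch of the same invocation via os/exec with that flag added (an illustration only; minikube keeps the default and retries instead):

```go
// Sketch only: re-runs the logged apply with validation disabled, which
// would sidestep the OpenAPI download but still fail if the apiserver
// itself is down when the objects are submitted.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl"
	cmd := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		kubectl, "apply", "--force", "--validate=false",
		"-f", "/etc/kubernetes/addons/storageclass.yaml")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}
```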
	[... 22 identical GET polls at ~500ms intervals from 20:30:15.82 through 20:30:26.32, all refused; node_ready.go:55 "will retry" warnings at 20:30:17, 20:30:19, 20:30:21, and 20:30:24 elided ...]
	I1217 20:30:26.774084  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:30:26.822641  522827 type.go:168] "Request Body" body=""
	I1217 20:30:26.822719  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:26.822976  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:30:26.823028  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:30:26.837910  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:30:26.841500  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:30:26.841530  522827 retry.go:31] will retry after 11.131595667s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
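
The fractional delays in the retry.go:31 lines (12.964696684s, 11.131595667s) indicate randomized backoff in minikube's retry helper. A minimal sketch of such a loop, assuming exponential backoff with jitter and a hypothetical retryApply function:

```go
// Sketch only: exponential backoff with additive jitter, which is one way
// to produce the non-round "will retry after ..." durations in the log.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func retryApply(apply func() error, attempts int, base time.Duration) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = apply(); err == nil {
			return nil
		}
		// Double the base each attempt and add a random fraction of it.
		sleep := base<<uint(i) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("apply failed, will retry after %s: %v\n", sleep, err)
		time.Sleep(sleep)
	}
	return err
}

func main() {
	err := retryApply(func() error {
		return fmt.Errorf("connect: connection refused")
	}, 3, 10*time.Second)
	fmt.Println("gave up:", err)
}
```

Jitter keeps the concurrently retrying addon applies (storageclass and storage-provisioner here) from hammering a recovering apiserver in lockstep.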
	[... 3 identical GET polls at 20:30:27.32, 20:30:27.82, and 20:30:28.32, all refused ...]
	I1217 20:30:28.523062  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 20:30:28.580613  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:30:28.584486  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:30:28.584522  522827 retry.go:31] will retry after 27.188888106s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... 19 identical GET polls at ~500ms intervals from 20:30:28.82 through 20:30:37.82, all refused; node_ready.go:55 "will retry" warnings at 20:30:28, 20:30:31, 20:30:33, and 20:30:35 elided ...]
	I1217 20:30:37.974039  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:30:38.040817  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:30:38.040869  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:30:38.040889  522827 retry.go:31] will retry after 31.049103728s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... 35 identical GET polls at ~500ms intervals from 20:30:38.32 through 20:30:55.32, all refused; node_ready.go:55 "will retry" warnings at 20:30:38, 20:30:40, 20:30:42, 20:30:44, 20:30:46, 20:30:48, 20:30:51, and 20:30:53 elided ...]
	I1217 20:30:55.774295  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 20:30:55.822774  522827 type.go:168] "Request Body" body=""
	I1217 20:30:55.822854  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:55.823178  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:30:55.823237  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:30:55.835665  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:30:55.835703  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:30:55.835722  522827 retry.go:31] will retry after 28.301795669s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
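
Every apply in this stretch fails for the same root cause: nothing answers on port 8441, whether reached as localhost or as 192.168.49.2. Gating the addon phase on an apiserver health probe would surface that directly instead of burning validation retries. A minimal sketch, assuming net/http and a hypothetical probeHealthz helper:

```go
// Sketch only: waits for the apiserver's /healthz endpoint to answer 200
// before any addon manifests are applied.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func probeHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Verification skipped only because this is a local probe sketch.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	if err := probeHealthz("https://192.168.49.2:8441/healthz", time.Minute); err != nil {
		fmt.Println(err) // would report the same connection-refused pattern seen above
	}
}
```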
	[... 15 identical GET polls at ~500ms intervals from 20:30:56.32 through 20:31:03.32, all refused; node_ready.go:55 "will retry" warnings at 20:30:57, 20:31:00, and 20:31:02 elided ...]
	I1217 20:31:03.822222  522827 type.go:168] "Request Body" body=""
	I1217 20:31:03.822303  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:03.822665  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:04.322217  522827 type.go:168] "Request Body" body=""
	I1217 20:31:04.322297  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:04.322674  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:04.822410  522827 type.go:168] "Request Body" body=""
	I1217 20:31:04.822489  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:04.822833  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:31:04.822889  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:31:05.322559  522827 type.go:168] "Request Body" body=""
	I1217 20:31:05.322643  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:05.323009  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:05.822714  522827 type.go:168] "Request Body" body=""
	I1217 20:31:05.822789  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:05.823090  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:06.322858  522827 type.go:168] "Request Body" body=""
	I1217 20:31:06.322935  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:06.323252  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:06.823001  522827 type.go:168] "Request Body" body=""
	I1217 20:31:06.823088  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:06.823427  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:31:06.823482  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:31:07.322676  522827 type.go:168] "Request Body" body=""
	I1217 20:31:07.322779  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:07.323088  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:07.822882  522827 type.go:168] "Request Body" body=""
	I1217 20:31:07.822978  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:07.823462  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:08.322191  522827 type.go:168] "Request Body" body=""
	I1217 20:31:08.322284  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:08.322582  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:08.822182  522827 type.go:168] "Request Body" body=""
	I1217 20:31:08.822262  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:08.822524  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:09.091155  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:31:09.152330  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:31:09.155944  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:31:09.156044  522827 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
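
The apply itself is a plain kubectl shell-out, and the "--validate=false" hint in stderr is a red herring here: validation fails only because kubectl cannot download the OpenAPI schema from the same unreachable apiserver, so disabling validation would still leave the apply with nowhere to send the manifest. The following is a sketch of the retry wrapper that the "apply failed, will retry" line (addons.go:477) implies; the function name, retry count, and backoff interval are assumptions for illustration.

// applyretry_sketch.go — illustrative only; names and backoff are assumptions.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// applyManifest shells out exactly the way the log shows minikube doing:
// sudo KUBECONFIG=... kubectl apply --force -f <manifest>.
func applyManifest(kubectl, manifest string) error {
	cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
		kubectl, "apply", "--force", "-f", manifest)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("apply %s: %w\noutput:\n%s", manifest, err, out)
	}
	return nil
}

func main() {
	const kubectl = "/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl"
	const manifest = "/etc/kubernetes/addons/storage-provisioner.yaml"
	for attempt := 1; attempt <= 4; attempt++ {
		err := applyManifest(kubectl, manifest)
		if err == nil {
			fmt.Println("applied", manifest)
			return
		}
		// The log shows ~15s between the two addon applies; retry on a
		// similar fixed interval rather than giving up immediately.
		fmt.Printf("attempt %d failed, will retry: %v\n", attempt, err)
		time.Sleep(15 * time.Second)
	}
}
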
	I1217 20:31:09.322225  522827 type.go:168] "Request Body" body=""
	I1217 20:31:09.322322  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:09.322666  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:31:09.322722  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	[the same 500ms poll continued through 20:31:23, every attempt refused]
	I1217 20:31:24.138134  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 20:31:24.201991  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:31:24.202036  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:31:24.202117  522827 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
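
Both addon failures and the node poll share one root cause: nothing is accepting TCP connections on apiserver port 8441, neither at the node address 192.168.49.2 nor at localhost. A quick probe makes that diagnosis explicit; this is a sketch, with the addresses taken from the errors above.

// probe_sketch.go — checks the two apiserver endpoints the log dials.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	for _, addr := range []string{"192.168.49.2:8441", "[::1]:8441"} {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			// With the apiserver down, expect "connect: connection refused".
			fmt.Printf("%s: %v\n", addr, err)
			continue
		}
		conn.Close()
		fmt.Printf("%s: reachable\n", addr)
	}
}
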
	I1217 20:31:24.205262  522827 out.go:179] * Enabled addons: 
	I1217 20:31:24.208903  522827 addons.go:530] duration metric: took 1m41.560475312s for enable addons: enabled=[]
	I1217 20:31:24.322240  522827 type.go:168] "Request Body" body=""
	I1217 20:31:24.322318  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:24.322668  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:31:25.322800  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	[the 500ms poll continued unchanged through 20:31:55.322859, where this excerpt of the log ends; no attempt ever reached the apiserver]
	I1217 20:31:55.822209  522827 type.go:168] "Request Body" body=""
	I1217 20:31:55.822286  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:55.822615  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:56.322334  522827 type.go:168] "Request Body" body=""
	I1217 20:31:56.322412  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:56.322700  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:56.822188  522827 type.go:168] "Request Body" body=""
	I1217 20:31:56.822256  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:56.822570  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:31:56.822617  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:31:57.322493  522827 type.go:168] "Request Body" body=""
	I1217 20:31:57.322571  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:57.322891  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:57.822474  522827 type.go:168] "Request Body" body=""
	I1217 20:31:57.822550  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:57.822881  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:58.322311  522827 type.go:168] "Request Body" body=""
	I1217 20:31:58.322386  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:58.322639  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:58.822249  522827 type.go:168] "Request Body" body=""
	I1217 20:31:58.822326  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:58.822659  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:31:58.822714  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:31:59.322242  522827 type.go:168] "Request Body" body=""
	I1217 20:31:59.322321  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:59.322689  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:59.822316  522827 type.go:168] "Request Body" body=""
	I1217 20:31:59.822386  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:59.822717  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:00.322382  522827 type.go:168] "Request Body" body=""
	I1217 20:32:00.322473  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:00.322812  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:00.822290  522827 type.go:168] "Request Body" body=""
	I1217 20:32:00.822374  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:00.822752  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:32:00.822812  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:32:01.322354  522827 type.go:168] "Request Body" body=""
	I1217 20:32:01.322434  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:01.322743  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:01.822229  522827 type.go:168] "Request Body" body=""
	I1217 20:32:01.822312  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:01.822658  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:02.322687  522827 type.go:168] "Request Body" body=""
	I1217 20:32:02.322779  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:02.323110  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:02.823078  522827 type.go:168] "Request Body" body=""
	I1217 20:32:02.823185  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:02.823454  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:32:02.823500  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:32:03.322198  522827 type.go:168] "Request Body" body=""
	I1217 20:32:03.322280  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:03.322619  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:03.822356  522827 type.go:168] "Request Body" body=""
	I1217 20:32:03.822434  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:03.822736  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:04.322315  522827 type.go:168] "Request Body" body=""
	I1217 20:32:04.322389  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:04.322658  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:04.822260  522827 type.go:168] "Request Body" body=""
	I1217 20:32:04.822366  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:04.822762  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:05.322471  522827 type.go:168] "Request Body" body=""
	I1217 20:32:05.322560  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:05.322916  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:32:05.322977  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:32:05.822615  522827 type.go:168] "Request Body" body=""
	I1217 20:32:05.822691  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:05.823031  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:06.322818  522827 type.go:168] "Request Body" body=""
	I1217 20:32:06.322895  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:06.323223  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:06.822995  522827 type.go:168] "Request Body" body=""
	I1217 20:32:06.823069  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:06.823419  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:07.322171  522827 type.go:168] "Request Body" body=""
	I1217 20:32:07.322242  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:07.322555  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:07.822236  522827 type.go:168] "Request Body" body=""
	I1217 20:32:07.822316  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:07.822639  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:32:07.822694  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:32:08.322234  522827 type.go:168] "Request Body" body=""
	I1217 20:32:08.322313  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:08.322610  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:08.822290  522827 type.go:168] "Request Body" body=""
	I1217 20:32:08.822368  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:08.822630  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:09.322201  522827 type.go:168] "Request Body" body=""
	I1217 20:32:09.322283  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:09.322629  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:09.822331  522827 type.go:168] "Request Body" body=""
	I1217 20:32:09.822412  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:09.822739  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:32:09.822812  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:32:10.322224  522827 type.go:168] "Request Body" body=""
	I1217 20:32:10.322297  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:10.322657  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:10.822387  522827 type.go:168] "Request Body" body=""
	I1217 20:32:10.822470  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:10.822875  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:11.322246  522827 type.go:168] "Request Body" body=""
	I1217 20:32:11.322339  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:11.322696  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:11.822377  522827 type.go:168] "Request Body" body=""
	I1217 20:32:11.822447  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:11.822730  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:12.322684  522827 type.go:168] "Request Body" body=""
	I1217 20:32:12.322757  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:12.323075  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:32:12.323135  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:32:12.823123  522827 type.go:168] "Request Body" body=""
	I1217 20:32:12.823215  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:12.823567  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:13.322271  522827 type.go:168] "Request Body" body=""
	I1217 20:32:13.322361  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:13.322653  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:13.822251  522827 type.go:168] "Request Body" body=""
	I1217 20:32:13.822330  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:13.822693  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:14.322242  522827 type.go:168] "Request Body" body=""
	I1217 20:32:14.322324  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:14.322673  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:14.822355  522827 type.go:168] "Request Body" body=""
	I1217 20:32:14.822428  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:14.822689  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:32:14.822736  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:32:15.322245  522827 type.go:168] "Request Body" body=""
	I1217 20:32:15.322322  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:15.322646  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:15.822224  522827 type.go:168] "Request Body" body=""
	I1217 20:32:15.822301  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:15.822625  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:16.322176  522827 type.go:168] "Request Body" body=""
	I1217 20:32:16.322257  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:16.322573  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:16.822265  522827 type.go:168] "Request Body" body=""
	I1217 20:32:16.822341  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:16.822656  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:17.322600  522827 type.go:168] "Request Body" body=""
	I1217 20:32:17.322693  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:17.323051  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:32:17.323108  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:32:17.822821  522827 type.go:168] "Request Body" body=""
	I1217 20:32:17.822890  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:17.823195  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:18.322987  522827 type.go:168] "Request Body" body=""
	I1217 20:32:18.323062  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:18.323387  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:18.823193  522827 type.go:168] "Request Body" body=""
	I1217 20:32:18.823271  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:18.823632  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:19.322231  522827 type.go:168] "Request Body" body=""
	I1217 20:32:19.322300  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:19.322563  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:19.822248  522827 type.go:168] "Request Body" body=""
	I1217 20:32:19.822332  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:19.822689  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:32:19.822743  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:32:20.322270  522827 type.go:168] "Request Body" body=""
	I1217 20:32:20.322351  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:20.322706  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:20.822403  522827 type.go:168] "Request Body" body=""
	I1217 20:32:20.822483  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:20.822759  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:21.322436  522827 type.go:168] "Request Body" body=""
	I1217 20:32:21.322518  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:21.322864  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:21.822578  522827 type.go:168] "Request Body" body=""
	I1217 20:32:21.822655  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:21.823020  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:32:21.823078  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:32:22.322774  522827 type.go:168] "Request Body" body=""
	I1217 20:32:22.322847  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:22.323116  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:22.823126  522827 type.go:168] "Request Body" body=""
	I1217 20:32:22.823213  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:22.823625  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:23.322331  522827 type.go:168] "Request Body" body=""
	I1217 20:32:23.322407  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:23.322751  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:23.822449  522827 type.go:168] "Request Body" body=""
	I1217 20:32:23.822519  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:23.822856  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:24.322228  522827 type.go:168] "Request Body" body=""
	I1217 20:32:24.322310  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:24.322653  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:32:24.322710  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:32:24.822249  522827 type.go:168] "Request Body" body=""
	I1217 20:32:24.822326  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:24.822711  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:25.322197  522827 type.go:168] "Request Body" body=""
	I1217 20:32:25.322269  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:25.322562  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:25.822261  522827 type.go:168] "Request Body" body=""
	I1217 20:32:25.822338  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:25.822615  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:26.322271  522827 type.go:168] "Request Body" body=""
	I1217 20:32:26.322347  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:26.322669  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:26.822294  522827 type.go:168] "Request Body" body=""
	I1217 20:32:26.822375  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:26.822659  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:32:26.822711  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:32:27.322690  522827 type.go:168] "Request Body" body=""
	I1217 20:32:27.322770  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:27.323105  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:27.822647  522827 type.go:168] "Request Body" body=""
	I1217 20:32:27.822726  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:27.823033  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:28.322766  522827 type.go:168] "Request Body" body=""
	I1217 20:32:28.322834  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:28.323196  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:28.822977  522827 type.go:168] "Request Body" body=""
	I1217 20:32:28.823055  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:28.823384  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:32:28.823437  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:32:29.322124  522827 type.go:168] "Request Body" body=""
	I1217 20:32:29.322205  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:29.322530  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:29.822227  522827 type.go:168] "Request Body" body=""
	I1217 20:32:29.822306  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:29.822567  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:30.322239  522827 type.go:168] "Request Body" body=""
	I1217 20:32:30.322320  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:30.322615  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:30.822251  522827 type.go:168] "Request Body" body=""
	I1217 20:32:30.822335  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:30.822684  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:31.322230  522827 type.go:168] "Request Body" body=""
	I1217 20:32:31.322297  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:31.322575  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:32:31.322631  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:32:31.822242  522827 type.go:168] "Request Body" body=""
	I1217 20:32:31.822318  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:31.822645  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:32.322646  522827 type.go:168] "Request Body" body=""
	I1217 20:32:32.322717  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:32.323066  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:32.822921  522827 type.go:168] "Request Body" body=""
	I1217 20:32:32.822993  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:32.823283  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:33.323063  522827 type.go:168] "Request Body" body=""
	I1217 20:32:33.323158  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:33.323500  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:32:33.323569  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:32:33.822260  522827 type.go:168] "Request Body" body=""
	I1217 20:32:33.822354  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:33.822685  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:34.322405  522827 type.go:168] "Request Body" body=""
	I1217 20:32:34.322478  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:34.322748  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:34.822278  522827 type.go:168] "Request Body" body=""
	I1217 20:32:34.822374  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:34.822748  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:35.322476  522827 type.go:168] "Request Body" body=""
	I1217 20:32:35.322570  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:35.322893  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:35.822173  522827 type.go:168] "Request Body" body=""
	I1217 20:32:35.822243  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:35.822502  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:32:35.822542  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:32:36.322264  522827 type.go:168] "Request Body" body=""
	I1217 20:32:36.322345  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:36.322701  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:36.822410  522827 type.go:168] "Request Body" body=""
	I1217 20:32:36.822488  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:36.822823  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:37.322668  522827 type.go:168] "Request Body" body=""
	I1217 20:32:37.322737  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:37.322989  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:37.822848  522827 type.go:168] "Request Body" body=""
	I1217 20:32:37.822924  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:37.823275  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:32:37.823343  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:32:38.323095  522827 type.go:168] "Request Body" body=""
	I1217 20:32:38.323188  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:38.323541  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:38.822238  522827 type.go:168] "Request Body" body=""
	I1217 20:32:38.822315  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:38.822608  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:39.322245  522827 type.go:168] "Request Body" body=""
	I1217 20:32:39.322381  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:39.322729  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:39.822442  522827 type.go:168] "Request Body" body=""
	I1217 20:32:39.822521  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:39.822851  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:40.322537  522827 type.go:168] "Request Body" body=""
	I1217 20:32:40.322611  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:40.322918  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:32:40.322971  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:32:40.822252  522827 type.go:168] "Request Body" body=""
	I1217 20:32:40.822327  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:40.822657  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:41.322382  522827 type.go:168] "Request Body" body=""
	I1217 20:32:41.322458  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:41.322791  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:41.822307  522827 type.go:168] "Request Body" body=""
	I1217 20:32:41.822377  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:41.822665  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:42.322693  522827 type.go:168] "Request Body" body=""
	I1217 20:32:42.322766  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:42.323102  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:32:42.323170  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:32:42.823022  522827 type.go:168] "Request Body" body=""
	I1217 20:32:42.823123  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:42.823479  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:43.322175  522827 type.go:168] "Request Body" body=""
	I1217 20:32:43.322271  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:43.322523  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:43.822319  522827 type.go:168] "Request Body" body=""
	I1217 20:32:43.822415  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:43.822789  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:44.322263  522827 type.go:168] "Request Body" body=""
	I1217 20:32:44.322339  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:44.322660  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:44.822216  522827 type.go:168] "Request Body" body=""
	I1217 20:32:44.822287  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:44.822560  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:32:44.822601  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same GET https://192.168.49.2:8441/api/v1/nodes/functional-655452 poll repeated every ~500ms from 20:32:45.322 through 20:33:45.822, each attempt logging an empty response ("Response" status="" headers="" milliseconds=0); the node_ready.go:55 "connection refused" warning recurred about every 2s ...]
	I1217 20:33:46.322495  522827 type.go:168] "Request Body" body=""
	I1217 20:33:46.322574  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:46.322896  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:33:46.322955  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:33:46.822620  522827 type.go:168] "Request Body" body=""
	I1217 20:33:46.822697  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:46.823021  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:47.322811  522827 type.go:168] "Request Body" body=""
	I1217 20:33:47.322892  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:47.323256  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:47.823109  522827 type.go:168] "Request Body" body=""
	I1217 20:33:47.823190  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:47.823487  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:48.322186  522827 type.go:168] "Request Body" body=""
	I1217 20:33:48.322263  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:48.322612  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:48.822323  522827 type.go:168] "Request Body" body=""
	I1217 20:33:48.822399  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:48.822726  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:33:48.822794  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:33:49.322196  522827 type.go:168] "Request Body" body=""
	I1217 20:33:49.322273  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:49.322588  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:49.822263  522827 type.go:168] "Request Body" body=""
	I1217 20:33:49.822348  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:49.822724  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:50.322473  522827 type.go:168] "Request Body" body=""
	I1217 20:33:50.322557  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:50.322925  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:50.822209  522827 type.go:168] "Request Body" body=""
	I1217 20:33:50.822284  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:50.822560  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:51.322238  522827 type.go:168] "Request Body" body=""
	I1217 20:33:51.322322  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:51.322661  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:33:51.322714  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:33:51.822385  522827 type.go:168] "Request Body" body=""
	I1217 20:33:51.822483  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:51.822831  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:52.322696  522827 type.go:168] "Request Body" body=""
	I1217 20:33:52.322769  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:52.323046  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:52.823035  522827 type.go:168] "Request Body" body=""
	I1217 20:33:52.823114  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:52.823430  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:53.322170  522827 type.go:168] "Request Body" body=""
	I1217 20:33:53.322245  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:53.322568  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:53.822148  522827 type.go:168] "Request Body" body=""
	I1217 20:33:53.822225  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:53.822487  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:33:53.822527  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:33:54.322252  522827 type.go:168] "Request Body" body=""
	I1217 20:33:54.322346  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:54.322676  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:54.822391  522827 type.go:168] "Request Body" body=""
	I1217 20:33:54.822487  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:54.822807  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:55.322470  522827 type.go:168] "Request Body" body=""
	I1217 20:33:55.322551  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:55.322876  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:55.822280  522827 type.go:168] "Request Body" body=""
	I1217 20:33:55.822364  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:55.822753  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:33:55.822813  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:33:56.322272  522827 type.go:168] "Request Body" body=""
	I1217 20:33:56.322351  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:56.322670  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:56.822314  522827 type.go:168] "Request Body" body=""
	I1217 20:33:56.822391  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:56.822658  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:57.322710  522827 type.go:168] "Request Body" body=""
	I1217 20:33:57.322780  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:57.323117  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:57.822916  522827 type.go:168] "Request Body" body=""
	I1217 20:33:57.823001  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:57.823366  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:33:57.823421  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:33:58.323148  522827 type.go:168] "Request Body" body=""
	I1217 20:33:58.323218  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:58.323513  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:58.822212  522827 type.go:168] "Request Body" body=""
	I1217 20:33:58.822296  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:58.822656  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:59.322223  522827 type.go:168] "Request Body" body=""
	I1217 20:33:59.322305  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:59.322651  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:59.822221  522827 type.go:168] "Request Body" body=""
	I1217 20:33:59.822297  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:59.822641  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:00.322298  522827 type.go:168] "Request Body" body=""
	I1217 20:34:00.322392  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:00.322726  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:34:00.322782  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:34:00.822577  522827 type.go:168] "Request Body" body=""
	I1217 20:34:00.822662  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:00.823038  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:01.322657  522827 type.go:168] "Request Body" body=""
	I1217 20:34:01.322731  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:01.323025  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:01.822880  522827 type.go:168] "Request Body" body=""
	I1217 20:34:01.822955  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:01.823320  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:02.323040  522827 type.go:168] "Request Body" body=""
	I1217 20:34:02.323124  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:02.323461  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:34:02.323514  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:34:02.822183  522827 type.go:168] "Request Body" body=""
	I1217 20:34:02.822254  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:02.822522  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:03.322245  522827 type.go:168] "Request Body" body=""
	I1217 20:34:03.322323  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:03.322656  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:03.822270  522827 type.go:168] "Request Body" body=""
	I1217 20:34:03.822349  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:03.822703  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:04.322271  522827 type.go:168] "Request Body" body=""
	I1217 20:34:04.322351  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:04.322622  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:04.822245  522827 type.go:168] "Request Body" body=""
	I1217 20:34:04.822344  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:04.822655  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:34:04.822707  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:34:05.322405  522827 type.go:168] "Request Body" body=""
	I1217 20:34:05.322482  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:05.322821  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:05.822296  522827 type.go:168] "Request Body" body=""
	I1217 20:34:05.822365  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:05.822641  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:06.322257  522827 type.go:168] "Request Body" body=""
	I1217 20:34:06.322357  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:06.322688  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:06.822277  522827 type.go:168] "Request Body" body=""
	I1217 20:34:06.822353  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:06.822641  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:07.322615  522827 type.go:168] "Request Body" body=""
	I1217 20:34:07.322701  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:07.322998  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:34:07.323048  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:34:07.822861  522827 type.go:168] "Request Body" body=""
	I1217 20:34:07.822938  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:07.823293  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:08.323117  522827 type.go:168] "Request Body" body=""
	I1217 20:34:08.323193  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:08.323537  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:08.822232  522827 type.go:168] "Request Body" body=""
	I1217 20:34:08.822303  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:08.822638  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:09.322217  522827 type.go:168] "Request Body" body=""
	I1217 20:34:09.322290  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:09.322637  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:09.822237  522827 type.go:168] "Request Body" body=""
	I1217 20:34:09.822317  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:09.822642  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:34:09.822697  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:34:10.322196  522827 type.go:168] "Request Body" body=""
	I1217 20:34:10.322271  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:10.322575  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:10.822218  522827 type.go:168] "Request Body" body=""
	I1217 20:34:10.822302  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:10.822644  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:11.322351  522827 type.go:168] "Request Body" body=""
	I1217 20:34:11.322431  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:11.322804  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:11.822287  522827 type.go:168] "Request Body" body=""
	I1217 20:34:11.822357  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:11.822618  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:12.322611  522827 type.go:168] "Request Body" body=""
	I1217 20:34:12.322687  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:12.323025  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:34:12.323091  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:34:12.822902  522827 type.go:168] "Request Body" body=""
	I1217 20:34:12.822982  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:12.823336  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:13.323079  522827 type.go:168] "Request Body" body=""
	I1217 20:34:13.323153  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:13.323408  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:13.822161  522827 type.go:168] "Request Body" body=""
	I1217 20:34:13.822240  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:13.822575  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:14.322231  522827 type.go:168] "Request Body" body=""
	I1217 20:34:14.322308  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:14.322650  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:14.822224  522827 type.go:168] "Request Body" body=""
	I1217 20:34:14.822298  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:14.822572  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:34:14.822622  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:34:15.322292  522827 type.go:168] "Request Body" body=""
	I1217 20:34:15.322381  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:15.322726  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:15.822430  522827 type.go:168] "Request Body" body=""
	I1217 20:34:15.822518  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:15.822853  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:16.322471  522827 type.go:168] "Request Body" body=""
	I1217 20:34:16.322546  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:16.322836  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:16.822523  522827 type.go:168] "Request Body" body=""
	I1217 20:34:16.822605  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:16.822901  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:34:16.822951  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:34:17.322790  522827 type.go:168] "Request Body" body=""
	I1217 20:34:17.322869  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:17.323207  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:17.822955  522827 type.go:168] "Request Body" body=""
	I1217 20:34:17.823029  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:17.823314  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:18.323135  522827 type.go:168] "Request Body" body=""
	I1217 20:34:18.323209  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:18.323507  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:18.822255  522827 type.go:168] "Request Body" body=""
	I1217 20:34:18.822334  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:18.822699  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:19.322387  522827 type.go:168] "Request Body" body=""
	I1217 20:34:19.322457  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:19.322762  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:34:19.322824  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:34:19.822236  522827 type.go:168] "Request Body" body=""
	I1217 20:34:19.822309  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:19.822629  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:20.322246  522827 type.go:168] "Request Body" body=""
	I1217 20:34:20.322329  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:20.322668  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:20.822198  522827 type.go:168] "Request Body" body=""
	I1217 20:34:20.822275  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:20.822590  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:21.322284  522827 type.go:168] "Request Body" body=""
	I1217 20:34:21.322362  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:21.322710  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:21.822303  522827 type.go:168] "Request Body" body=""
	I1217 20:34:21.822382  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:21.822717  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:34:21.822772  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:34:22.322546  522827 type.go:168] "Request Body" body=""
	I1217 20:34:22.322615  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:22.322869  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:22.822850  522827 type.go:168] "Request Body" body=""
	I1217 20:34:22.822926  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:22.823275  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:23.323068  522827 type.go:168] "Request Body" body=""
	I1217 20:34:23.323142  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:23.323472  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:23.822173  522827 type.go:168] "Request Body" body=""
	I1217 20:34:23.822252  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:23.822565  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:24.322250  522827 type.go:168] "Request Body" body=""
	I1217 20:34:24.322333  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:24.322684  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:34:24.322736  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:34:24.822303  522827 type.go:168] "Request Body" body=""
	I1217 20:34:24.822394  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:24.822738  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:25.322430  522827 type.go:168] "Request Body" body=""
	I1217 20:34:25.322506  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:25.322760  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:25.822245  522827 type.go:168] "Request Body" body=""
	I1217 20:34:25.822324  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:25.822671  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:26.322262  522827 type.go:168] "Request Body" body=""
	I1217 20:34:26.322344  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:26.322685  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:26.822350  522827 type.go:168] "Request Body" body=""
	I1217 20:34:26.822425  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:26.822723  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:34:26.822775  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:34:27.322731  522827 type.go:168] "Request Body" body=""
	I1217 20:34:27.322805  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:27.323135  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:27.822789  522827 type.go:168] "Request Body" body=""
	I1217 20:34:27.822869  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:27.823223  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:28.323014  522827 type.go:168] "Request Body" body=""
	I1217 20:34:28.323092  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:28.323358  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:28.823134  522827 type.go:168] "Request Body" body=""
	I1217 20:34:28.823222  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:28.823569  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:34:28.823650  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:34:29.322221  522827 type.go:168] "Request Body" body=""
	I1217 20:34:29.322302  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:29.322620  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:29.822198  522827 type.go:168] "Request Body" body=""
	I1217 20:34:29.822278  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:29.822544  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:30.322232  522827 type.go:168] "Request Body" body=""
	I1217 20:34:30.322310  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:30.322633  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:30.822346  522827 type.go:168] "Request Body" body=""
	I1217 20:34:30.822427  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:30.822767  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:31.322434  522827 type.go:168] "Request Body" body=""
	I1217 20:34:31.322509  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:31.322812  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:34:31.322864  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:34:31.822232  522827 type.go:168] "Request Body" body=""
	I1217 20:34:31.822308  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:31.822637  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:32.322630  522827 type.go:168] "Request Body" body=""
	I1217 20:34:32.322703  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:32.323039  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:32.822905  522827 type.go:168] "Request Body" body=""
	I1217 20:34:32.822987  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:32.823335  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:33.323139  522827 type.go:168] "Request Body" body=""
	I1217 20:34:33.323215  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:33.323560  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:34:33.323633  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:34:33.822242  522827 type.go:168] "Request Body" body=""
	I1217 20:34:33.822322  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:33.822657  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:34.322213  522827 type.go:168] "Request Body" body=""
	I1217 20:34:34.322306  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:34.322645  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:34.822274  522827 type.go:168] "Request Body" body=""
	I1217 20:34:34.822351  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:34.822676  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:35.322402  522827 type.go:168] "Request Body" body=""
	I1217 20:34:35.322487  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:35.322839  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:35.822515  522827 type.go:168] "Request Body" body=""
	I1217 20:34:35.822590  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:35.822930  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:34:35.822983  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the cycle above repeats unchanged: the same GET to https://192.168.49.2:8441/api/v1/nodes/functional-655452 is issued every ~500ms from 20:34:36 through 20:35:37, every attempt fails with "dial tcp 192.168.49.2:8441: connect: connection refused", and node_ready.go:55 logs a "will retry" warning every few attempts ...]
	I1217 20:35:37.822509  522827 type.go:168] "Request Body" body=""
	I1217 20:35:37.822586  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:37.822928  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:38.322513  522827 type.go:168] "Request Body" body=""
	I1217 20:35:38.322595  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:38.323137  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:38.822886  522827 type.go:168] "Request Body" body=""
	I1217 20:35:38.822959  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:38.823295  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:39.323106  522827 type.go:168] "Request Body" body=""
	I1217 20:35:39.323188  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:39.323560  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:35:39.323633  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:35:39.822201  522827 type.go:168] "Request Body" body=""
	I1217 20:35:39.822276  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:39.822619  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:40.322173  522827 type.go:168] "Request Body" body=""
	I1217 20:35:40.322246  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:40.322545  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:40.822242  522827 type.go:168] "Request Body" body=""
	I1217 20:35:40.822317  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:40.822754  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:41.322470  522827 type.go:168] "Request Body" body=""
	I1217 20:35:41.322556  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:41.322901  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:41.822209  522827 type.go:168] "Request Body" body=""
	I1217 20:35:41.822282  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:41.822536  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:35:41.822583  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:35:42.322519  522827 type.go:168] "Request Body" body=""
	I1217 20:35:42.322603  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:42.322998  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:42.822247  522827 type.go:168] "Request Body" body=""
	I1217 20:35:42.822336  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:42.822693  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:43.322188  522827 type.go:168] "Request Body" body=""
	I1217 20:35:43.322249  522827 node_ready.go:38] duration metric: took 6m0.000239045s for node "functional-655452" to be "Ready" ...
	I1217 20:35:43.325291  522827 out.go:203] 
	W1217 20:35:43.328188  522827 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1217 20:35:43.328206  522827 out.go:285] * 
	W1217 20:35:43.330331  522827 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 20:35:43.333111  522827 out.go:203] 
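
Note on the loop above: the node-ready wait issues GET /api/v1/nodes/functional-655452 roughly every 500ms and gives up when the 6m0s StartHostTimeout elapses; every probe fails with "connection refused", i.e. nothing is listening on 192.168.49.2:8441. A minimal shell sketch of the same probe done by hand (hypothetical commands, not part of the captured run; assumes the default RBAC that lets unauthenticated clients hit /readyz):

	# poll the apiserver the way the wait loop does; exits once /readyz answers
	until curl -ksf https://192.168.49.2:8441/readyz >/dev/null; do
	  sleep 0.5
	done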
	
	
	==> CRI-O <==
	Dec 17 20:29:40 functional-655452 crio[5447]: time="2025-12-17T20:29:40.480906996Z" level=info msg="Using the internal default seccomp profile"
	Dec 17 20:29:40 functional-655452 crio[5447]: time="2025-12-17T20:29:40.480914348Z" level=info msg="AppArmor is disabled by the system or at CRI-O build-time"
	Dec 17 20:29:40 functional-655452 crio[5447]: time="2025-12-17T20:29:40.480920182Z" level=info msg="No blockio config file specified, blockio not configured"
	Dec 17 20:29:40 functional-655452 crio[5447]: time="2025-12-17T20:29:40.480926287Z" level=info msg="RDT not available in the host system"
	Dec 17 20:29:40 functional-655452 crio[5447]: time="2025-12-17T20:29:40.480942771Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 17 20:29:40 functional-655452 crio[5447]: time="2025-12-17T20:29:40.481715201Z" level=info msg="Conmon does support the --sync option"
	Dec 17 20:29:40 functional-655452 crio[5447]: time="2025-12-17T20:29:40.481735672Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 17 20:29:40 functional-655452 crio[5447]: time="2025-12-17T20:29:40.481751648Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 17 20:29:40 functional-655452 crio[5447]: time="2025-12-17T20:29:40.482467921Z" level=info msg="Conmon does support the --sync option"
	Dec 17 20:29:40 functional-655452 crio[5447]: time="2025-12-17T20:29:40.482494301Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 17 20:29:40 functional-655452 crio[5447]: time="2025-12-17T20:29:40.482628103Z" level=info msg="Updated default CNI network name to "
	Dec 17 20:29:40 functional-655452 crio[5447]: time="2025-12-17T20:29:40.483177475Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oc
i/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"cgroupfs\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n
uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_
memory = \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_d
ir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [c
rio.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 17 20:29:40 functional-655452 crio[5447]: time="2025-12-17T20:29:40.483642644Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 17 20:29:40 functional-655452 crio[5447]: time="2025-12-17T20:29:40.48371238Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 17 20:29:40 functional-655452 crio[5447]: time="2025-12-17T20:29:40.540730063Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 17 20:29:40 functional-655452 crio[5447]: time="2025-12-17T20:29:40.540938081Z" level=info msg="Starting seccomp notifier watcher"
	Dec 17 20:29:40 functional-655452 crio[5447]: time="2025-12-17T20:29:40.54099651Z" level=info msg="Create NRI interface"
	Dec 17 20:29:40 functional-655452 crio[5447]: time="2025-12-17T20:29:40.541126464Z" level=info msg="built-in NRI default validator is disabled"
	Dec 17 20:29:40 functional-655452 crio[5447]: time="2025-12-17T20:29:40.541145295Z" level=info msg="runtime interface created"
	Dec 17 20:29:40 functional-655452 crio[5447]: time="2025-12-17T20:29:40.541159761Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 17 20:29:40 functional-655452 crio[5447]: time="2025-12-17T20:29:40.541167408Z" level=info msg="runtime interface starting up..."
	Dec 17 20:29:40 functional-655452 crio[5447]: time="2025-12-17T20:29:40.541173546Z" level=info msg="starting plugins..."
	Dec 17 20:29:40 functional-655452 crio[5447]: time="2025-12-17T20:29:40.541188307Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 17 20:29:40 functional-655452 crio[5447]: time="2025-12-17T20:29:40.541273649Z" level=info msg="No systemd watchdog enabled"
	Dec 17 20:29:40 functional-655452 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
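
Note: CRI-O itself restarts cleanly, and the configuration it dumps matches what minikube provisions for this profile (cgroup_manager = "cgroupfs", default_runtime = "crun"); the empty container-status table below confirms no pod sandbox was ever created. An illustrative way to re-check the runtime side out of band (hedged sketch; assumes crictl and crio are on the node's PATH, as in the kicbase image):

	out/minikube-linux-arm64 -p functional-655452 ssh -- sudo crictl ps -a
	out/minikube-linux-arm64 -p functional-655452 ssh -- sudo crio config 2>/dev/null | grep -E 'cgroup_manager|default_runtime'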
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:35:47.931393    8833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:35:47.931984    8833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:35:47.933490    8833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:35:47.933951    8833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:35:47.935558    8833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
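
Note: kubectl fails here for the same underlying reason as the wait loop above: no process is bound to port 8441 inside the node. A quick illustrative check of the listener (hypothetical follow-up, not part of the captured run):

	out/minikube-linux-arm64 -p functional-655452 ssh -- sudo ss -ltnp | grep 8441 || echo "no listener on 8441"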
	
	
	==> dmesg <==
	[Dec17 19:02] hrtimer: interrupt took 71042327 ns
	[Dec17 20:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec17 20:12] overlayfs: idmapped layers are currently not supported
	[  +0.080288] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec17 20:17] overlayfs: idmapped layers are currently not supported
	[Dec17 20:18] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 20:35:47 up  3:18,  0 user,  load average: 0.56, 0.35, 0.91
	Linux functional-655452 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 20:35:45 functional-655452 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 20:35:46 functional-655452 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1140.
	Dec 17 20:35:46 functional-655452 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 20:35:46 functional-655452 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 20:35:46 functional-655452 kubelet[8712]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 20:35:46 functional-655452 kubelet[8712]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 20:35:46 functional-655452 kubelet[8712]: E1217 20:35:46.391805    8712 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 20:35:46 functional-655452 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 20:35:46 functional-655452 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 20:35:47 functional-655452 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1141.
	Dec 17 20:35:47 functional-655452 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 20:35:47 functional-655452 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 20:35:47 functional-655452 kubelet[8745]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 20:35:47 functional-655452 kubelet[8745]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 20:35:47 functional-655452 kubelet[8745]: E1217 20:35:47.159903    8745 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 20:35:47 functional-655452 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 20:35:47 functional-655452 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 20:35:47 functional-655452 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1142.
	Dec 17 20:35:47 functional-655452 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 20:35:47 functional-655452 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 20:35:47 functional-655452 kubelet[8817]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 20:35:47 functional-655452 kubelet[8817]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 20:35:47 functional-655452 kubelet[8817]: E1217 20:35:47.884568    8817 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 20:35:47 functional-655452 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 20:35:47 functional-655452 systemd[1]: kubelet.service: Failed with result 'exit-code'.
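
Note: the kubelet section is the actual root cause. systemd has restarted kubelet 1142 times, and every attempt exits with "kubelet is configured to not run on a host using cgroup v1". The kernel section above shows the host is Ubuntu 20.04 on 5.15.0-1084-aws, which boots with cgroup v1 by default, so the v1.35.0-rc.1 kubelet never comes up, and without it neither does the apiserver on 8441. A hedged one-liner to confirm the node's cgroup mode (illustrative command):

	# prints "cgroup2fs" on a cgroup v2 host and "tmpfs" on a cgroup v1 host
	out/minikube-linux-arm64 -p functional-655452 ssh -- stat -fc %T /sys/fs/cgroup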
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-655452 -n functional-655452
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-655452 -n functional-655452: exit status 2 (333.833745ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-655452" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods (2.41s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd (2.45s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 kubectl -- --context functional-655452 get pods
functional_test.go:731: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-655452 kubectl -- --context functional-655452 get pods: exit status 1 (122.504994ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:734: failed to get pods. args "out/minikube-linux-arm64 -p functional-655452 kubectl -- --context functional-655452 get pods": exit status 1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-655452
helpers_test.go:244: (dbg) docker inspect functional-655452:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ce7a6a54d14f425b4187bd0ca1b803527a4413b0e2019a0bca6ff0c4cc720b0f",
	        "Created": "2025-12-17T20:21:23.127111799Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 517339,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T20:21:23.205943598Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/ce7a6a54d14f425b4187bd0ca1b803527a4413b0e2019a0bca6ff0c4cc720b0f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ce7a6a54d14f425b4187bd0ca1b803527a4413b0e2019a0bca6ff0c4cc720b0f/hostname",
	        "HostsPath": "/var/lib/docker/containers/ce7a6a54d14f425b4187bd0ca1b803527a4413b0e2019a0bca6ff0c4cc720b0f/hosts",
	        "LogPath": "/var/lib/docker/containers/ce7a6a54d14f425b4187bd0ca1b803527a4413b0e2019a0bca6ff0c4cc720b0f/ce7a6a54d14f425b4187bd0ca1b803527a4413b0e2019a0bca6ff0c4cc720b0f-json.log",
	        "Name": "/functional-655452",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-655452:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-655452",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ce7a6a54d14f425b4187bd0ca1b803527a4413b0e2019a0bca6ff0c4cc720b0f",
	                "LowerDir": "/var/lib/docker/overlay2/b078ea641dafe3bb75ca3d83c43fd851f0f209e5f3a5a8b9ceab888e394031a8-init/diff:/var/lib/docker/overlay2/c4759b344ecb109c83d66e4ef56c76903f1d1e597efb86ab2d2c7911b1130a8f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b078ea641dafe3bb75ca3d83c43fd851f0f209e5f3a5a8b9ceab888e394031a8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b078ea641dafe3bb75ca3d83c43fd851f0f209e5f3a5a8b9ceab888e394031a8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b078ea641dafe3bb75ca3d83c43fd851f0f209e5f3a5a8b9ceab888e394031a8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-655452",
	                "Source": "/var/lib/docker/volumes/functional-655452/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-655452",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-655452",
	                "name.minikube.sigs.k8s.io": "functional-655452",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "77ae7dbcf69b3201be4ce65a88d235dbad078142318e01a5939425f9766fb924",
	            "SandboxKey": "/var/run/docker/netns/77ae7dbcf69b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33178"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33179"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33182"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33180"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33181"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-655452": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "36:c6:98:b5:58:5a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b68cd6e69ccdb81b0870dd976e2d5401ca696480842d2c469293121d59435cfb",
	                    "EndpointID": "82926bb4b08b5f66131c1026a8bc72a582e9cb8c6d97a278a494965923f47cb0",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-655452",
	                        "ce7a6a54d14f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
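Worth reading out of the inspect dump above: the container is Running and 8441/tcp is published to 127.0.0.1:33181 on the host, so Docker's port mapping is intact and the refusals originate inside the container. An illustrative way to query just that mapping (hypothetical follow-up command):

	docker port functional-655452 8441/tcp
	# expected, per the inspect output above: 127.0.0.1:33181
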
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-655452 -n functional-655452
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-655452 -n functional-655452: exit status 2 (306.745851ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p functional-655452 logs -n 25: (1.021181201s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                      ARGS                                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-643319 image ls --format short --alsologtostderr                                                                                     │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │ 17 Dec 25 20:21 UTC │
	│ image   │ functional-643319 image ls --format yaml --alsologtostderr                                                                                      │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │ 17 Dec 25 20:21 UTC │
	│ ssh     │ functional-643319 ssh pgrep buildkitd                                                                                                           │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │                     │
	│ image   │ functional-643319 image build -t localhost/my-image:functional-643319 testdata/build --alsologtostderr                                          │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │ 17 Dec 25 20:21 UTC │
	│ image   │ functional-643319 image ls --format json --alsologtostderr                                                                                      │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │ 17 Dec 25 20:21 UTC │
	│ image   │ functional-643319 image ls --format table --alsologtostderr                                                                                     │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │ 17 Dec 25 20:21 UTC │
	│ image   │ functional-643319 image ls                                                                                                                      │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │ 17 Dec 25 20:21 UTC │
	│ delete  │ -p functional-643319                                                                                                                            │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │ 17 Dec 25 20:21 UTC │
	│ start   │ -p functional-655452 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │                     │
	│ start   │ -p functional-655452 --alsologtostderr -v=8                                                                                                     │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:29 UTC │                     │
	│ cache   │ functional-655452 cache add registry.k8s.io/pause:3.1                                                                                           │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:35 UTC │ 17 Dec 25 20:35 UTC │
	│ cache   │ functional-655452 cache add registry.k8s.io/pause:3.3                                                                                           │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:35 UTC │ 17 Dec 25 20:35 UTC │
	│ cache   │ functional-655452 cache add registry.k8s.io/pause:latest                                                                                        │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:35 UTC │ 17 Dec 25 20:35 UTC │
	│ cache   │ functional-655452 cache add minikube-local-cache-test:functional-655452                                                                         │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:35 UTC │ 17 Dec 25 20:35 UTC │
	│ cache   │ functional-655452 cache delete minikube-local-cache-test:functional-655452                                                                      │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:35 UTC │ 17 Dec 25 20:35 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                │ minikube          │ jenkins │ v1.37.0 │ 17 Dec 25 20:35 UTC │ 17 Dec 25 20:35 UTC │
	│ cache   │ list                                                                                                                                            │ minikube          │ jenkins │ v1.37.0 │ 17 Dec 25 20:35 UTC │ 17 Dec 25 20:35 UTC │
	│ ssh     │ functional-655452 ssh sudo crictl images                                                                                                        │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:35 UTC │ 17 Dec 25 20:35 UTC │
	│ ssh     │ functional-655452 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                              │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:35 UTC │ 17 Dec 25 20:35 UTC │
	│ ssh     │ functional-655452 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                         │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:35 UTC │                     │
	│ cache   │ functional-655452 cache reload                                                                                                                  │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:35 UTC │ 17 Dec 25 20:35 UTC │
	│ ssh     │ functional-655452 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                         │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:35 UTC │ 17 Dec 25 20:35 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                │ minikube          │ jenkins │ v1.37.0 │ 17 Dec 25 20:35 UTC │ 17 Dec 25 20:35 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                             │ minikube          │ jenkins │ v1.37.0 │ 17 Dec 25 20:35 UTC │ 17 Dec 25 20:35 UTC │
	│ kubectl │ functional-655452 kubectl -- --context functional-655452 get pods                                                                               │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:35 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 20:29:37
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 20:29:37.230217  522827 out.go:360] Setting OutFile to fd 1 ...
	I1217 20:29:37.230338  522827 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:29:37.230348  522827 out.go:374] Setting ErrFile to fd 2...
	I1217 20:29:37.230354  522827 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:29:37.230641  522827 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-485134/.minikube/bin
	I1217 20:29:37.231040  522827 out.go:368] Setting JSON to false
	I1217 20:29:37.231956  522827 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":11527,"bootTime":1765991851,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1217 20:29:37.232033  522827 start.go:143] virtualization:  
	I1217 20:29:37.235360  522827 out.go:179] * [functional-655452] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 20:29:37.239166  522827 out.go:179]   - MINIKUBE_LOCATION=21808
	I1217 20:29:37.239533  522827 notify.go:221] Checking for updates...
	I1217 20:29:37.245507  522827 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 20:29:37.248369  522827 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-485134/kubeconfig
	I1217 20:29:37.251209  522827 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-485134/.minikube
	I1217 20:29:37.254179  522827 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 20:29:37.257129  522827 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 20:29:37.260562  522827 config.go:182] Loaded profile config "functional-655452": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 20:29:37.260726  522827 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 20:29:37.289208  522827 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 20:29:37.289391  522827 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 20:29:37.344995  522827 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 20:29:37.33566048 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 20:29:37.345107  522827 docker.go:319] overlay module found
	I1217 20:29:37.348246  522827 out.go:179] * Using the docker driver based on existing profile
	I1217 20:29:37.351193  522827 start.go:309] selected driver: docker
	I1217 20:29:37.351220  522827 start.go:927] validating driver "docker" against &{Name:functional-655452 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-655452 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:29:37.351378  522827 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 20:29:37.351479  522827 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 20:29:37.406404  522827 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 20:29:37.397152083 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 20:29:37.406839  522827 cni.go:84] Creating CNI manager for ""
	I1217 20:29:37.406903  522827 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 20:29:37.406958  522827 start.go:353] cluster config:
	{Name:functional-655452 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-655452 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:29:37.410074  522827 out.go:179] * Starting "functional-655452" primary control-plane node in "functional-655452" cluster
	I1217 20:29:37.413044  522827 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 20:29:37.415960  522827 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 20:29:37.418922  522827 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1217 20:29:37.418997  522827 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21808-485134/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4
	I1217 20:29:37.419012  522827 cache.go:65] Caching tarball of preloaded images
	I1217 20:29:37.419028  522827 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 20:29:37.419099  522827 preload.go:238] Found /home/jenkins/minikube-integration/21808-485134/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1217 20:29:37.419110  522827 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on crio
	I1217 20:29:37.419218  522827 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/config.json ...
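	The profile save at profile.go:143 is plain JSON-on-disk: the whole cluster config is marshalled and written to the profile's config.json. A minimal sketch of the same idea, with a toy ClusterConfig standing in for minikube's much larger struct (the one dumped at start.go:927 above); names and fields here are illustrative, not minikube's actual types:

    package main

    import (
        "encoding/json"
        "os"
        "path/filepath"
    )

    // ClusterConfig is a hypothetical stand-in for minikube's config struct.
    type ClusterConfig struct {
        Name              string
        Driver            string
        KubernetesVersion string
        ContainerRuntime  string
    }

    // saveProfile marshals the config and writes it under profiles/<name>/config.json.
    func saveProfile(miniHome string, cfg ClusterConfig) error {
        dir := filepath.Join(miniHome, "profiles", cfg.Name)
        if err := os.MkdirAll(dir, 0o755); err != nil {
            return err
        }
        data, err := json.MarshalIndent(cfg, "", "  ")
        if err != nil {
            return err
        }
        return os.WriteFile(filepath.Join(dir, "config.json"), data, 0o644)
    }

    func main() {
        _ = saveProfile("/home/jenkins/.minikube", ClusterConfig{
            Name:              "functional-655452",
            Driver:            "docker",
            KubernetesVersion: "v1.35.0-rc.1",
            ContainerRuntime:  "crio",
        })
    }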
	I1217 20:29:37.438883  522827 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 20:29:37.438908  522827 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 20:29:37.438929  522827 cache.go:243] Successfully downloaded all kic artifacts
	I1217 20:29:37.438964  522827 start.go:360] acquireMachinesLock for functional-655452: {Name:mk8b480106227149e1b79da3c4f580b9dc5e0f96 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 20:29:37.439024  522827 start.go:364] duration metric: took 37.399µs to acquireMachinesLock for "functional-655452"
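	acquireMachinesLock above carries a Delay of 500ms and a Timeout of 10m0s. A rough sketch of that retry-until-deadline pattern, using an O_EXCL lock file as a stand-in for minikube's actual cross-process mutex (the real implementation differs; this only illustrates the Delay/Timeout loop):

    package main

    import (
        "errors"
        "fmt"
        "os"
        "time"
    )

    // acquire polls for an exclusive lock file until the deadline passes,
    // mirroring the Delay/Timeout knobs printed in the log. Illustrative only.
    func acquire(path string, delay, timeout time.Duration) (release func(), err error) {
        deadline := time.Now().Add(timeout)
        for {
            f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o644)
            if err == nil {
                f.Close()
                return func() { os.Remove(path) }, nil
            }
            if !errors.Is(err, os.ErrExist) {
                return nil, err
            }
            if time.Now().After(deadline) {
                return nil, fmt.Errorf("timed out waiting for %s", path)
            }
            time.Sleep(delay)
        }
    }

    func main() {
        release, err := acquire("/tmp/functional-655452.lock", 500*time.Millisecond, 10*time.Minute)
        if err != nil {
            panic(err)
        }
        defer release()
        // ... machine create/fix work happens while the lock is held ...
    }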
	I1217 20:29:37.439047  522827 start.go:96] Skipping create...Using existing machine configuration
	I1217 20:29:37.439057  522827 fix.go:54] fixHost starting: 
	I1217 20:29:37.439341  522827 cli_runner.go:164] Run: docker container inspect functional-655452 --format={{.State.Status}}
	I1217 20:29:37.456072  522827 fix.go:112] recreateIfNeeded on functional-655452: state=Running err=<nil>
	W1217 20:29:37.456113  522827 fix.go:138] unexpected machine state, will restart: <nil>
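	cli_runner gets the container state by shelling out to the docker CLI with the exact command shown above; the same probe, trimmed to its essentials in Go:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerState asks the docker CLI for a container's .State.Status,
    // the same check fix.go makes before deciding whether to restart.
    func containerState(name string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect", name,
            "--format", "{{.State.Status}}").Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        state, err := containerState("functional-655452")
        if err != nil {
            panic(err)
        }
        fmt.Println(state) // e.g. "running", matching state=Running above
    }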
	I1217 20:29:37.459179  522827 out.go:252] * Updating the running docker "functional-655452" container ...
	I1217 20:29:37.459210  522827 machine.go:94] provisionDockerMachine start ...
	I1217 20:29:37.459290  522827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
	I1217 20:29:37.476101  522827 main.go:143] libmachine: Using SSH client type: native
	I1217 20:29:37.476449  522827 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1217 20:29:37.476466  522827 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 20:29:37.607148  522827 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-655452
	
	I1217 20:29:37.607176  522827 ubuntu.go:182] provisioning hostname "functional-655452"
	I1217 20:29:37.607253  522827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
	I1217 20:29:37.625523  522827 main.go:143] libmachine: Using SSH client type: native
	I1217 20:29:37.625850  522827 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1217 20:29:37.625869  522827 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-655452 && echo "functional-655452" | sudo tee /etc/hostname
	I1217 20:29:37.765012  522827 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-655452
	
	I1217 20:29:37.765095  522827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
	I1217 20:29:37.783574  522827 main.go:143] libmachine: Using SSH client type: native
	I1217 20:29:37.784233  522827 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1217 20:29:37.784256  522827 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-655452' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-655452/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-655452' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 20:29:37.923858  522827 main.go:143] libmachine: SSH cmd err, output: <nil>: 
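	Each provisioning step above (hostname, the tee to /etc/hostname, the /etc/hosts patch) is one command run over SSH against the container's published port 33178 as user docker. A self-contained sketch of that run-one-command pattern using golang.org/x/crypto/ssh; this is not minikube's libmachine code, which wraps the same idea:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // runSSH executes a single command on the node and returns its combined output.
    func runSSH(addr, user, keyPath, cmd string) (string, error) {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return "", err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return "", err
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local container
        }
        client, err := ssh.Dial("tcp", addr, cfg)
        if err != nil {
            return "", err
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            return "", err
        }
        defer sess.Close()
        out, err := sess.CombinedOutput(cmd)
        return string(out), err
    }

    func main() {
        out, err := runSSH("127.0.0.1:33178", "docker",
            "/home/jenkins/.minikube/machines/functional-655452/id_rsa", "hostname")
        fmt.Println(out, err)
    }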
	I1217 20:29:37.923885  522827 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21808-485134/.minikube CaCertPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21808-485134/.minikube}
	I1217 20:29:37.923918  522827 ubuntu.go:190] setting up certificates
	I1217 20:29:37.923930  522827 provision.go:84] configureAuth start
	I1217 20:29:37.923995  522827 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-655452
	I1217 20:29:37.942198  522827 provision.go:143] copyHostCerts
	I1217 20:29:37.942245  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem
	I1217 20:29:37.942294  522827 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem, removing ...
	I1217 20:29:37.942308  522827 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem
	I1217 20:29:37.942385  522827 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem (1082 bytes)
	I1217 20:29:37.942483  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem
	I1217 20:29:37.942506  522827 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem, removing ...
	I1217 20:29:37.942510  522827 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem
	I1217 20:29:37.942538  522827 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem (1123 bytes)
	I1217 20:29:37.942584  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem
	I1217 20:29:37.942605  522827 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem, removing ...
	I1217 20:29:37.942613  522827 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem
	I1217 20:29:37.942638  522827 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem (1675 bytes)
	I1217 20:29:37.942696  522827 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem org=jenkins.functional-655452 san=[127.0.0.1 192.168.49.2 functional-655452 localhost minikube]
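	The server cert generated at provision.go:117 is an ordinary x509 leaf signed by the profile's CA, carrying exactly the SANs listed in the log line. A condensed sketch with crypto/x509; the CA is generated inline here, whereas minikube loads ca.pem/ca-key.pem from disk, and error handling is elided for brevity:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Throwaway CA; minikube reuses the one under .minikube/certs instead.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.functional-655452"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // The SANs from the log: san=[127.0.0.1 192.168.49.2 functional-655452 localhost minikube]
            DNSNames:    []string{"functional-655452", "localhost", "minikube"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
        }
        der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }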
	I1217 20:29:38.205373  522827 provision.go:177] copyRemoteCerts
	I1217 20:29:38.205444  522827 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 20:29:38.205488  522827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
	I1217 20:29:38.222940  522827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/functional-655452/id_rsa Username:docker}
	I1217 20:29:38.324557  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1217 20:29:38.324643  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 20:29:38.342369  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1217 20:29:38.342442  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 20:29:38.361702  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1217 20:29:38.361816  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 20:29:38.379229  522827 provision.go:87] duration metric: took 455.281269ms to configureAuth
	I1217 20:29:38.379306  522827 ubuntu.go:206] setting minikube options for container-runtime
	I1217 20:29:38.379506  522827 config.go:182] Loaded profile config "functional-655452": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 20:29:38.379650  522827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
	I1217 20:29:38.397098  522827 main.go:143] libmachine: Using SSH client type: native
	I1217 20:29:38.397425  522827 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1217 20:29:38.397449  522827 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 20:29:38.710104  522827 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 20:29:38.710129  522827 machine.go:97] duration metric: took 1.250909554s to provisionDockerMachine
	I1217 20:29:38.710141  522827 start.go:293] postStartSetup for "functional-655452" (driver="docker")
	I1217 20:29:38.710173  522827 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 20:29:38.710243  522827 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 20:29:38.710290  522827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
	I1217 20:29:38.729105  522827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/functional-655452/id_rsa Username:docker}
	I1217 20:29:38.823561  522827 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 20:29:38.826921  522827 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1217 20:29:38.826944  522827 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1217 20:29:38.826949  522827 command_runner.go:130] > VERSION_ID="12"
	I1217 20:29:38.826954  522827 command_runner.go:130] > VERSION="12 (bookworm)"
	I1217 20:29:38.826958  522827 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1217 20:29:38.826962  522827 command_runner.go:130] > ID=debian
	I1217 20:29:38.826966  522827 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1217 20:29:38.826971  522827 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1217 20:29:38.826976  522827 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1217 20:29:38.827033  522827 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 20:29:38.827056  522827 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 20:29:38.827068  522827 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-485134/.minikube/addons for local assets ...
	I1217 20:29:38.827127  522827 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-485134/.minikube/files for local assets ...
	I1217 20:29:38.827213  522827 filesync.go:149] local asset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem -> 4884122.pem in /etc/ssl/certs
	I1217 20:29:38.827224  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem -> /etc/ssl/certs/4884122.pem
	I1217 20:29:38.827310  522827 filesync.go:149] local asset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/test/nested/copy/488412/hosts -> hosts in /etc/test/nested/copy/488412
	I1217 20:29:38.827318  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/test/nested/copy/488412/hosts -> /etc/test/nested/copy/488412/hosts
	I1217 20:29:38.827361  522827 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/488412
	I1217 20:29:38.835073  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem --> /etc/ssl/certs/4884122.pem (1708 bytes)
	I1217 20:29:38.853051  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/test/nested/copy/488412/hosts --> /etc/test/nested/copy/488412/hosts (40 bytes)
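	filesync.go's contract is visible in the two assets above: anything under $MINIKUBE_HOME/files keeps its relative path on the node, so files/etc/test/nested/copy/488412/hosts lands at /etc/test/nested/copy/488412/hosts. A small sketch of that scan; the local-to-remote pairing is the point, and the copy itself is the scp step already shown:

    package main

    import (
        "fmt"
        "io/fs"
        "path/filepath"
        "strings"
    )

    // scanFiles walks the files dir and pairs each local path with the
    // remote path it will occupy on the node (its path relative to root).
    func scanFiles(root string) ([][2]string, error) {
        var assets [][2]string
        err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
            if err != nil || d.IsDir() {
                return err
            }
            rel := strings.TrimPrefix(path, root)
            assets = append(assets, [2]string{path, rel}) // local -> remote
            return nil
        })
        return assets, err
    }

    func main() {
        assets, err := scanFiles("/home/jenkins/.minikube/files")
        if err != nil {
            panic(err)
        }
        for _, a := range assets {
            fmt.Printf("%s -> %s\n", a[0], a[1])
        }
    }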
	I1217 20:29:38.870277  522827 start.go:296] duration metric: took 160.119138ms for postStartSetup
	I1217 20:29:38.870416  522827 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 20:29:38.870497  522827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
	I1217 20:29:38.887313  522827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/functional-655452/id_rsa Username:docker}
	I1217 20:29:38.980667  522827 command_runner.go:130] > 14%
	I1217 20:29:38.980748  522827 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 20:29:38.985147  522827 command_runner.go:130] > 169G
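	The two probes above shell out to df for used-percent and free space. The same numbers can be read directly from statfs; a Linux-only sketch (df's use% rounds a little differently, so treat the output as approximate):

    package main

    import (
        "fmt"
        "syscall"
    )

    func main() {
        var st syscall.Statfs_t
        if err := syscall.Statfs("/var", &st); err != nil {
            panic(err)
        }
        total := st.Blocks * uint64(st.Bsize)
        free := st.Bavail * uint64(st.Bsize)
        usedPct := 100 * (total - free) / total
        // Roughly the "14%" and "169G" the df commands report above.
        fmt.Printf("%d%% used, %dG available\n", usedPct, free>>30)
    }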
	I1217 20:29:38.985687  522827 fix.go:56] duration metric: took 1.546626529s for fixHost
	I1217 20:29:38.985712  522827 start.go:83] releasing machines lock for "functional-655452", held for 1.546675825s
	I1217 20:29:38.985789  522827 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-655452
	I1217 20:29:39.004882  522827 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem (1338 bytes)
	W1217 20:29:39.004958  522827 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412_empty.pem, impossibly tiny 0 bytes
	I1217 20:29:39.004969  522827 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 20:29:39.005005  522827 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem (1082 bytes)
	I1217 20:29:39.005049  522827 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem (1123 bytes)
	I1217 20:29:39.005073  522827 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem (1675 bytes)
	I1217 20:29:39.005126  522827 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem (1708 bytes)
	I1217 20:29:39.005177  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:29:39.005197  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem -> /usr/share/ca-certificates/488412.pem
	I1217 20:29:39.005217  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem -> /usr/share/ca-certificates/4884122.pem
	I1217 20:29:39.005238  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 20:29:39.005294  522827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
	I1217 20:29:39.023309  522827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/functional-655452/id_rsa Username:docker}
	I1217 20:29:39.128919  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem --> /usr/share/ca-certificates/488412.pem (1338 bytes)
	I1217 20:29:39.146238  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem --> /usr/share/ca-certificates/4884122.pem (1708 bytes)
	I1217 20:29:39.163663  522827 ssh_runner.go:195] Run: openssl version
	I1217 20:29:39.169395  522827 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1217 20:29:39.169821  522827 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/488412.pem
	I1217 20:29:39.177042  522827 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/488412.pem /etc/ssl/certs/488412.pem
	I1217 20:29:39.184227  522827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/488412.pem
	I1217 20:29:39.187671  522827 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 17 20:21 /usr/share/ca-certificates/488412.pem
	I1217 20:29:39.187835  522827 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 20:21 /usr/share/ca-certificates/488412.pem
	I1217 20:29:39.187899  522827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/488412.pem
	I1217 20:29:39.232645  522827 command_runner.go:130] > 51391683
	I1217 20:29:39.233156  522827 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 20:29:39.240764  522827 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4884122.pem
	I1217 20:29:39.248070  522827 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4884122.pem /etc/ssl/certs/4884122.pem
	I1217 20:29:39.256139  522827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4884122.pem
	I1217 20:29:39.260468  522827 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 17 20:21 /usr/share/ca-certificates/4884122.pem
	I1217 20:29:39.260613  522827 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 20:21 /usr/share/ca-certificates/4884122.pem
	I1217 20:29:39.260717  522827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4884122.pem
	I1217 20:29:39.301324  522827 command_runner.go:130] > 3ec20f2e
	I1217 20:29:39.301774  522827 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 20:29:39.309564  522827 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:29:39.316908  522827 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 20:29:39.330430  522827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:29:39.334931  522827 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 17 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:29:39.335647  522827 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:29:39.335725  522827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:29:39.377554  522827 command_runner.go:130] > b5213941
	I1217 20:29:39.378955  522827 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 20:29:39.389619  522827 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-certificates >/dev/null 2>&1 && sudo update-ca-certificates || true"
	I1217 20:29:39.393257  522827 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-trust >/dev/null 2>&1 && sudo update-ca-trust extract || true"
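	The per-certificate loop above is: install the PEM, ask openssl for its subject hash (51391683, 3ec20f2e, b5213941), then confirm that /etc/ssl/certs/<hash>.0 resolves, which is where OpenSSL's lookup-by-hash expects a link. A sketch that creates such a link directly; on this image the link is normally produced by update-ca-certificates, so this is illustrative rather than what minikube runs:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // installCert computes a cert's subject hash via openssl and exposes the
    // cert under /etc/ssl/certs/<hash>.0 for hash-based lookup.
    func installCert(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
        os.Remove(link) // ln -fs semantics: replace any stale link
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := installCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            panic(err)
        }
    }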
	I1217 20:29:39.396841  522827 ssh_runner.go:195] Run: cat /version.json
	I1217 20:29:39.396923  522827 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 20:29:39.487006  522827 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1217 20:29:39.489563  522827 command_runner.go:130] > {"iso_version": "v1.37.0-1765579389-22117", "kicbase_version": "v0.0.48-1765661130-22141", "minikube_version": "v1.37.0", "commit": "cbb33128a244032d08f8fc6e6c9f03b30f0da3e4"}
	I1217 20:29:39.489734  522827 ssh_runner.go:195] Run: systemctl --version
	I1217 20:29:39.495686  522827 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1217 20:29:39.495789  522827 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1217 20:29:39.496199  522827 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 20:29:39.531768  522827 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1217 20:29:39.536045  522827 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1217 20:29:39.536498  522827 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 20:29:39.536609  522827 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 20:29:39.544584  522827 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 20:29:39.544609  522827 start.go:496] detecting cgroup driver to use...
	I1217 20:29:39.544639  522827 detect.go:187] detected "cgroupfs" cgroup driver on host os
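	detect.go's exact heuristic for reporting "cgroupfs" is not shown in this log. One common probe of the host's cgroup layout, which feeds the same kind of decision, is the filesystem magic of /sys/fs/cgroup; this is a hypothetical stand-in, not necessarily minikube's check:

    package main

    import (
        "fmt"
        "syscall"
    )

    const cgroup2SuperMagic = 0x63677270 // CGROUP2_SUPER_MAGIC from linux/magic.h

    func main() {
        var st syscall.Statfs_t
        if err := syscall.Statfs("/sys/fs/cgroup", &st); err != nil {
            panic(err)
        }
        if st.Type == cgroup2SuperMagic {
            fmt.Println("cgroup v2 (unified hierarchy)")
        } else {
            fmt.Println("cgroup v1")
        }
    }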
	I1217 20:29:39.544686  522827 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 20:29:39.559677  522827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 20:29:39.572537  522827 docker.go:218] disabling cri-docker service (if available) ...
	I1217 20:29:39.572629  522827 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 20:29:39.588063  522827 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 20:29:39.601417  522827 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 20:29:39.711338  522827 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 20:29:39.828534  522827 docker.go:234] disabling docker service ...
	I1217 20:29:39.828602  522827 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 20:29:39.843450  522827 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 20:29:39.856661  522827 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 20:29:39.988443  522827 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 20:29:40.133139  522827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 20:29:40.147217  522827 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 20:29:40.161697  522827 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1217 20:29:40.163096  522827 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 20:29:40.163182  522827 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:29:40.173178  522827 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1217 20:29:40.173338  522827 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:29:40.182803  522827 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:29:40.192168  522827 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:29:40.201463  522827 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 20:29:40.209602  522827 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:29:40.218600  522827 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:29:40.227088  522827 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
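	After this run of sed edits, the drop-in at /etc/crio/crio.conf.d/02-crio.conf carries entries like the following. This is reconstructed from the commands above, not captured from the node; the section headers are CRI-O's standard ones (pause_image lives under [crio.image], the rest under [crio.runtime]) and are an assumption here:

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]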
	I1217 20:29:40.236327  522827 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 20:29:40.243154  522827 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1217 20:29:40.244193  522827 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 20:29:40.251635  522827 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:29:40.361488  522827 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 20:29:40.546740  522827 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 20:29:40.546847  522827 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 20:29:40.551021  522827 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1217 20:29:40.551089  522827 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1217 20:29:40.551102  522827 command_runner.go:130] > Device: 0,72	Inode: 1636        Links: 1
	I1217 20:29:40.551127  522827 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1217 20:29:40.551137  522827 command_runner.go:130] > Access: 2025-12-17 20:29:40.478294705 +0000
	I1217 20:29:40.551143  522827 command_runner.go:130] > Modify: 2025-12-17 20:29:40.478294705 +0000
	I1217 20:29:40.551149  522827 command_runner.go:130] > Change: 2025-12-17 20:29:40.478294705 +0000
	I1217 20:29:40.551152  522827 command_runner.go:130] >  Birth: -
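	"Will wait 60s for socket path" is a stat-poll loop with a deadline, which the successful stat above satisfies on the first try. A sketch of the same wait, with an assumed 500ms poll interval (minikube's actual cadence may differ):

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls stat until the CRI socket exists or the deadline passes.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
                return nil
            }
            time.Sleep(500 * time.Millisecond) // poll interval is a guess
        }
        return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
        if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
            panic(err)
        }
        fmt.Println("crio.sock is up")
    }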
	I1217 20:29:40.551189  522827 start.go:564] Will wait 60s for crictl version
	I1217 20:29:40.551247  522827 ssh_runner.go:195] Run: which crictl
	I1217 20:29:40.554786  522827 command_runner.go:130] > /usr/local/bin/crictl
	I1217 20:29:40.554923  522827 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 20:29:40.577444  522827 command_runner.go:130] > Version:  0.1.0
	I1217 20:29:40.577470  522827 command_runner.go:130] > RuntimeName:  cri-o
	I1217 20:29:40.577476  522827 command_runner.go:130] > RuntimeVersion:  1.34.3
	I1217 20:29:40.577491  522827 command_runner.go:130] > RuntimeApiVersion:  v1
	I1217 20:29:40.579694  522827 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 20:29:40.579819  522827 ssh_runner.go:195] Run: crio --version
	I1217 20:29:40.609324  522827 command_runner.go:130] > crio version 1.34.3
	I1217 20:29:40.609350  522827 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1217 20:29:40.609357  522827 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1217 20:29:40.609362  522827 command_runner.go:130] >    GitTreeState:   dirty
	I1217 20:29:40.609367  522827 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1217 20:29:40.609371  522827 command_runner.go:130] >    GoVersion:      go1.24.6
	I1217 20:29:40.609375  522827 command_runner.go:130] >    Compiler:       gc
	I1217 20:29:40.609382  522827 command_runner.go:130] >    Platform:       linux/arm64
	I1217 20:29:40.609386  522827 command_runner.go:130] >    Linkmode:       static
	I1217 20:29:40.609390  522827 command_runner.go:130] >    BuildTags:
	I1217 20:29:40.609393  522827 command_runner.go:130] >      static
	I1217 20:29:40.609397  522827 command_runner.go:130] >      netgo
	I1217 20:29:40.609401  522827 command_runner.go:130] >      osusergo
	I1217 20:29:40.609410  522827 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1217 20:29:40.609414  522827 command_runner.go:130] >      seccomp
	I1217 20:29:40.609421  522827 command_runner.go:130] >      apparmor
	I1217 20:29:40.609424  522827 command_runner.go:130] >      selinux
	I1217 20:29:40.609429  522827 command_runner.go:130] >    LDFlags:          unknown
	I1217 20:29:40.609433  522827 command_runner.go:130] >    SeccompEnabled:   true
	I1217 20:29:40.609441  522827 command_runner.go:130] >    AppArmorEnabled:  false
	I1217 20:29:40.609527  522827 ssh_runner.go:195] Run: crio --version
	I1217 20:29:40.638467  522827 command_runner.go:130] > crio version 1.34.3
	I1217 20:29:40.638491  522827 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1217 20:29:40.638499  522827 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1217 20:29:40.638505  522827 command_runner.go:130] >    GitTreeState:   dirty
	I1217 20:29:40.638509  522827 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1217 20:29:40.638516  522827 command_runner.go:130] >    GoVersion:      go1.24.6
	I1217 20:29:40.638520  522827 command_runner.go:130] >    Compiler:       gc
	I1217 20:29:40.638533  522827 command_runner.go:130] >    Platform:       linux/arm64
	I1217 20:29:40.638543  522827 command_runner.go:130] >    Linkmode:       static
	I1217 20:29:40.638547  522827 command_runner.go:130] >    BuildTags:
	I1217 20:29:40.638550  522827 command_runner.go:130] >      static
	I1217 20:29:40.638554  522827 command_runner.go:130] >      netgo
	I1217 20:29:40.638558  522827 command_runner.go:130] >      osusergo
	I1217 20:29:40.638568  522827 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1217 20:29:40.638572  522827 command_runner.go:130] >      seccomp
	I1217 20:29:40.638576  522827 command_runner.go:130] >      apparmor
	I1217 20:29:40.638583  522827 command_runner.go:130] >      selinux
	I1217 20:29:40.638587  522827 command_runner.go:130] >    LDFlags:          unknown
	I1217 20:29:40.638592  522827 command_runner.go:130] >    SeccompEnabled:   true
	I1217 20:29:40.638604  522827 command_runner.go:130] >    AppArmorEnabled:  false
	I1217 20:29:40.644077  522827 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	I1217 20:29:40.647046  522827 cli_runner.go:164] Run: docker network inspect functional-655452 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 20:29:40.665190  522827 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1217 20:29:40.669398  522827 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1217 20:29:40.669593  522827 kubeadm.go:884] updating cluster {Name:functional-655452 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-655452 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 20:29:40.669700  522827 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1217 20:29:40.669779  522827 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 20:29:40.704282  522827 command_runner.go:130] > {
	I1217 20:29:40.704302  522827 command_runner.go:130] >   "images":  [
	I1217 20:29:40.704307  522827 command_runner.go:130] >     {
	I1217 20:29:40.704316  522827 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1217 20:29:40.704321  522827 command_runner.go:130] >       "repoTags":  [
	I1217 20:29:40.704328  522827 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1217 20:29:40.704331  522827 command_runner.go:130] >       ],
	I1217 20:29:40.704335  522827 command_runner.go:130] >       "repoDigests":  [
	I1217 20:29:40.704350  522827 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1217 20:29:40.704362  522827 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1217 20:29:40.704370  522827 command_runner.go:130] >       ],
	I1217 20:29:40.704374  522827 command_runner.go:130] >       "size":  "111333938",
	I1217 20:29:40.704379  522827 command_runner.go:130] >       "username":  "",
	I1217 20:29:40.704389  522827 command_runner.go:130] >       "pinned":  false
	I1217 20:29:40.704403  522827 command_runner.go:130] >     },
	I1217 20:29:40.704406  522827 command_runner.go:130] >     {
	I1217 20:29:40.704413  522827 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1217 20:29:40.704419  522827 command_runner.go:130] >       "repoTags":  [
	I1217 20:29:40.704425  522827 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1217 20:29:40.704429  522827 command_runner.go:130] >       ],
	I1217 20:29:40.704433  522827 command_runner.go:130] >       "repoDigests":  [
	I1217 20:29:40.704445  522827 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1217 20:29:40.704454  522827 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1217 20:29:40.704460  522827 command_runner.go:130] >       ],
	I1217 20:29:40.704464  522827 command_runner.go:130] >       "size":  "29037500",
	I1217 20:29:40.704468  522827 command_runner.go:130] >       "username":  "",
	I1217 20:29:40.704476  522827 command_runner.go:130] >       "pinned":  false
	I1217 20:29:40.704482  522827 command_runner.go:130] >     },
	I1217 20:29:40.704485  522827 command_runner.go:130] >     {
	I1217 20:29:40.704494  522827 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1217 20:29:40.704503  522827 command_runner.go:130] >       "repoTags":  [
	I1217 20:29:40.704509  522827 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1217 20:29:40.704512  522827 command_runner.go:130] >       ],
	I1217 20:29:40.704516  522827 command_runner.go:130] >       "repoDigests":  [
	I1217 20:29:40.704528  522827 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1217 20:29:40.704536  522827 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1217 20:29:40.704542  522827 command_runner.go:130] >       ],
	I1217 20:29:40.704547  522827 command_runner.go:130] >       "size":  "74491780",
	I1217 20:29:40.704551  522827 command_runner.go:130] >       "username":  "nonroot",
	I1217 20:29:40.704556  522827 command_runner.go:130] >       "pinned":  false
	I1217 20:29:40.704561  522827 command_runner.go:130] >     },
	I1217 20:29:40.704568  522827 command_runner.go:130] >     {
	I1217 20:29:40.704579  522827 command_runner.go:130] >       "id":  "271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57",
	I1217 20:29:40.704583  522827 command_runner.go:130] >       "repoTags":  [
	I1217 20:29:40.704588  522827 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.6-0"
	I1217 20:29:40.704594  522827 command_runner.go:130] >       ],
	I1217 20:29:40.704598  522827 command_runner.go:130] >       "repoDigests":  [
	I1217 20:29:40.704605  522827 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890",
	I1217 20:29:40.704613  522827 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:aa0d8bc8f6a6c3655b8efe0a10c5bf052f5574ebe13f904c5b0c9002ce4b2561"
	I1217 20:29:40.704619  522827 command_runner.go:130] >       ],
	I1217 20:29:40.704623  522827 command_runner.go:130] >       "size":  "60850387",
	I1217 20:29:40.704626  522827 command_runner.go:130] >       "uid":  {
	I1217 20:29:40.704630  522827 command_runner.go:130] >         "value":  "0"
	I1217 20:29:40.704636  522827 command_runner.go:130] >       },
	I1217 20:29:40.704645  522827 command_runner.go:130] >       "username":  "",
	I1217 20:29:40.704657  522827 command_runner.go:130] >       "pinned":  false
	I1217 20:29:40.704660  522827 command_runner.go:130] >     },
	I1217 20:29:40.704664  522827 command_runner.go:130] >     {
	I1217 20:29:40.704673  522827 command_runner.go:130] >       "id":  "3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54",
	I1217 20:29:40.704679  522827 command_runner.go:130] >       "repoTags":  [
	I1217 20:29:40.704685  522827 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-rc.1"
	I1217 20:29:40.704689  522827 command_runner.go:130] >       ],
	I1217 20:29:40.704693  522827 command_runner.go:130] >       "repoDigests":  [
	I1217 20:29:40.704704  522827 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:58367b5c0428495c0c12411fa7a018f5d40fe57307b85d8935b1ed35706ff7ee",
	I1217 20:29:40.704721  522827 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:e6ee3594f9ff061c53d6721bc04b810ec4227e28da3bd98e59206d552d45cde8"
	I1217 20:29:40.704724  522827 command_runner.go:130] >       ],
	I1217 20:29:40.704729  522827 command_runner.go:130] >       "size":  "85015535",
	I1217 20:29:40.704735  522827 command_runner.go:130] >       "uid":  {
	I1217 20:29:40.704739  522827 command_runner.go:130] >         "value":  "0"
	I1217 20:29:40.704742  522827 command_runner.go:130] >       },
	I1217 20:29:40.704746  522827 command_runner.go:130] >       "username":  "",
	I1217 20:29:40.704753  522827 command_runner.go:130] >       "pinned":  false
	I1217 20:29:40.704756  522827 command_runner.go:130] >     },
	I1217 20:29:40.704759  522827 command_runner.go:130] >     {
	I1217 20:29:40.704772  522827 command_runner.go:130] >       "id":  "a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a",
	I1217 20:29:40.704779  522827 command_runner.go:130] >       "repoTags":  [
	I1217 20:29:40.704785  522827 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1"
	I1217 20:29:40.704788  522827 command_runner.go:130] >       ],
	I1217 20:29:40.704793  522827 command_runner.go:130] >       "repoDigests":  [
	I1217 20:29:40.704803  522827 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:42360249c0c729ed0542bc8e4a6cd9ba4df358a4e5a140f6c24d5f966ee5121f",
	I1217 20:29:40.704813  522827 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:57ab0f75f58d99f4be7bff7bdda015fcbf1b7c20e58ba2722c8c39f751dc8c98"
	I1217 20:29:40.704822  522827 command_runner.go:130] >       ],
	I1217 20:29:40.704827  522827 command_runner.go:130] >       "size":  "72170325",
	I1217 20:29:40.704831  522827 command_runner.go:130] >       "uid":  {
	I1217 20:29:40.704835  522827 command_runner.go:130] >         "value":  "0"
	I1217 20:29:40.704838  522827 command_runner.go:130] >       },
	I1217 20:29:40.704842  522827 command_runner.go:130] >       "username":  "",
	I1217 20:29:40.704846  522827 command_runner.go:130] >       "pinned":  false
	I1217 20:29:40.704848  522827 command_runner.go:130] >     },
	I1217 20:29:40.704851  522827 command_runner.go:130] >     {
	I1217 20:29:40.704858  522827 command_runner.go:130] >       "id":  "7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e",
	I1217 20:29:40.704861  522827 command_runner.go:130] >       "repoTags":  [
	I1217 20:29:40.704866  522827 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-rc.1"
	I1217 20:29:40.704870  522827 command_runner.go:130] >       ],
	I1217 20:29:40.704875  522827 command_runner.go:130] >       "repoDigests":  [
	I1217 20:29:40.704883  522827 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:709cbcd809826ad98b553d8e283a04db70fa653526d1c2a5e1b50000701b2b6f",
	I1217 20:29:40.704894  522827 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bdd1fa8b53558a2e1967379a36b085c93faf15581e5fa9f212baf679d89c5bb5"
	I1217 20:29:40.704898  522827 command_runner.go:130] >       ],
	I1217 20:29:40.704903  522827 command_runner.go:130] >       "size":  "74107287",
	I1217 20:29:40.704910  522827 command_runner.go:130] >       "username":  "",
	I1217 20:29:40.704914  522827 command_runner.go:130] >       "pinned":  false
	I1217 20:29:40.704926  522827 command_runner.go:130] >     },
	I1217 20:29:40.704930  522827 command_runner.go:130] >     {
	I1217 20:29:40.704936  522827 command_runner.go:130] >       "id":  "abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde",
	I1217 20:29:40.704940  522827 command_runner.go:130] >       "repoTags":  [
	I1217 20:29:40.704946  522827 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-rc.1"
	I1217 20:29:40.704949  522827 command_runner.go:130] >       ],
	I1217 20:29:40.704963  522827 command_runner.go:130] >       "repoDigests":  [
	I1217 20:29:40.704975  522827 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8155e3db27c7081abfc8eb5da70820cfeaf0bba7449e45360e8220e670f417d3",
	I1217 20:29:40.704993  522827 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:9ac9664e74153a60bf2c27af77561abc33d85a716a48893c7e50ad356adc4ea0"
	I1217 20:29:40.705000  522827 command_runner.go:130] >       ],
	I1217 20:29:40.705005  522827 command_runner.go:130] >       "size":  "49822549",
	I1217 20:29:40.705008  522827 command_runner.go:130] >       "uid":  {
	I1217 20:29:40.705014  522827 command_runner.go:130] >         "value":  "0"
	I1217 20:29:40.705017  522827 command_runner.go:130] >       },
	I1217 20:29:40.705025  522827 command_runner.go:130] >       "username":  "",
	I1217 20:29:40.705029  522827 command_runner.go:130] >       "pinned":  false
	I1217 20:29:40.705033  522827 command_runner.go:130] >     },
	I1217 20:29:40.705036  522827 command_runner.go:130] >     {
	I1217 20:29:40.705043  522827 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1217 20:29:40.705055  522827 command_runner.go:130] >       "repoTags":  [
	I1217 20:29:40.705060  522827 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1217 20:29:40.705063  522827 command_runner.go:130] >       ],
	I1217 20:29:40.705068  522827 command_runner.go:130] >       "repoDigests":  [
	I1217 20:29:40.705078  522827 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1217 20:29:40.705089  522827 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1217 20:29:40.705094  522827 command_runner.go:130] >       ],
	I1217 20:29:40.705097  522827 command_runner.go:130] >       "size":  "519884",
	I1217 20:29:40.705101  522827 command_runner.go:130] >       "uid":  {
	I1217 20:29:40.705108  522827 command_runner.go:130] >         "value":  "65535"
	I1217 20:29:40.705111  522827 command_runner.go:130] >       },
	I1217 20:29:40.705115  522827 command_runner.go:130] >       "username":  "",
	I1217 20:29:40.705119  522827 command_runner.go:130] >       "pinned":  true
	I1217 20:29:40.705128  522827 command_runner.go:130] >     }
	I1217 20:29:40.705133  522827 command_runner.go:130] >   ]
	I1217 20:29:40.705136  522827 command_runner.go:130] > }
	I1217 20:29:40.705310  522827 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 20:29:40.705323  522827 crio.go:433] Images already preloaded, skipping extraction
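	The preload decision at crio.go:514 follows from the JSON above: every image the cluster needs already has a matching repoTag, so extraction is skipped. A sketch of that check using the same field names as the dump; whether minikube keys on tags in exactly this way is an assumption:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // crictlImages matches the JSON shape dumped above, keeping only the
    // fields the preload check needs.
    type crictlImages struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    // preloaded reports whether every required tag is already present.
    func preloaded(required []string) (bool, error) {
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            return false, err
        }
        var imgs crictlImages
        if err := json.Unmarshal(out, &imgs); err != nil {
            return false, err
        }
        have := map[string]bool{}
        for _, img := range imgs.Images {
            for _, t := range img.RepoTags {
                have[t] = true
            }
        }
        for _, r := range required {
            if !have[r] {
                return false, nil
            }
        }
        return true, nil
    }

    func main() {
        ok, err := preloaded([]string{
            "registry.k8s.io/kube-apiserver:v1.35.0-rc.1",
            "registry.k8s.io/etcd:3.6.6-0",
            "registry.k8s.io/pause:3.10.1",
        })
        fmt.Println(ok, err)
    }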
	I1217 20:29:40.705384  522827 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 20:29:40.728606  522827 command_runner.go:130] > {
	I1217 20:29:40.728624  522827 command_runner.go:130] >   "images":  [
	I1217 20:29:40.728629  522827 command_runner.go:130] >     {
	I1217 20:29:40.728638  522827 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1217 20:29:40.728643  522827 command_runner.go:130] >       "repoTags":  [
	I1217 20:29:40.728657  522827 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1217 20:29:40.728665  522827 command_runner.go:130] >       ],
	I1217 20:29:40.728669  522827 command_runner.go:130] >       "repoDigests":  [
	I1217 20:29:40.728678  522827 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1217 20:29:40.728686  522827 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1217 20:29:40.728689  522827 command_runner.go:130] >       ],
	I1217 20:29:40.728694  522827 command_runner.go:130] >       "size":  "111333938",
	I1217 20:29:40.728698  522827 command_runner.go:130] >       "username":  "",
	I1217 20:29:40.728705  522827 command_runner.go:130] >       "pinned":  false
	I1217 20:29:40.728708  522827 command_runner.go:130] >     },
	I1217 20:29:40.728711  522827 command_runner.go:130] >     {
	I1217 20:29:40.728718  522827 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1217 20:29:40.728726  522827 command_runner.go:130] >       "repoTags":  [
	I1217 20:29:40.728731  522827 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1217 20:29:40.728735  522827 command_runner.go:130] >       ],
	I1217 20:29:40.728739  522827 command_runner.go:130] >       "repoDigests":  [
	I1217 20:29:40.728747  522827 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1217 20:29:40.728756  522827 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1217 20:29:40.728759  522827 command_runner.go:130] >       ],
	I1217 20:29:40.728763  522827 command_runner.go:130] >       "size":  "29037500",
	I1217 20:29:40.728767  522827 command_runner.go:130] >       "username":  "",
	I1217 20:29:40.728774  522827 command_runner.go:130] >       "pinned":  false
	I1217 20:29:40.728778  522827 command_runner.go:130] >     },
	I1217 20:29:40.728781  522827 command_runner.go:130] >     {
	I1217 20:29:40.728789  522827 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1217 20:29:40.728793  522827 command_runner.go:130] >       "repoTags":  [
	I1217 20:29:40.728798  522827 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1217 20:29:40.728801  522827 command_runner.go:130] >       ],
	I1217 20:29:40.728805  522827 command_runner.go:130] >       "repoDigests":  [
	I1217 20:29:40.728813  522827 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1217 20:29:40.728821  522827 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1217 20:29:40.728824  522827 command_runner.go:130] >       ],
	I1217 20:29:40.728829  522827 command_runner.go:130] >       "size":  "74491780",
	I1217 20:29:40.728833  522827 command_runner.go:130] >       "username":  "nonroot",
	I1217 20:29:40.728840  522827 command_runner.go:130] >       "pinned":  false
	I1217 20:29:40.728843  522827 command_runner.go:130] >     },
	I1217 20:29:40.728846  522827 command_runner.go:130] >     {
	I1217 20:29:40.728853  522827 command_runner.go:130] >       "id":  "271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57",
	I1217 20:29:40.728857  522827 command_runner.go:130] >       "repoTags":  [
	I1217 20:29:40.728862  522827 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.6-0"
	I1217 20:29:40.728866  522827 command_runner.go:130] >       ],
	I1217 20:29:40.728870  522827 command_runner.go:130] >       "repoDigests":  [
	I1217 20:29:40.728877  522827 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890",
	I1217 20:29:40.728887  522827 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:aa0d8bc8f6a6c3655b8efe0a10c5bf052f5574ebe13f904c5b0c9002ce4b2561"
	I1217 20:29:40.728890  522827 command_runner.go:130] >       ],
	I1217 20:29:40.728894  522827 command_runner.go:130] >       "size":  "60850387",
	I1217 20:29:40.728898  522827 command_runner.go:130] >       "uid":  {
	I1217 20:29:40.728902  522827 command_runner.go:130] >         "value":  "0"
	I1217 20:29:40.728904  522827 command_runner.go:130] >       },
	I1217 20:29:40.728913  522827 command_runner.go:130] >       "username":  "",
	I1217 20:29:40.728917  522827 command_runner.go:130] >       "pinned":  false
	I1217 20:29:40.728920  522827 command_runner.go:130] >     },
	I1217 20:29:40.728924  522827 command_runner.go:130] >     {
	I1217 20:29:40.728930  522827 command_runner.go:130] >       "id":  "3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54",
	I1217 20:29:40.728934  522827 command_runner.go:130] >       "repoTags":  [
	I1217 20:29:40.728939  522827 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-rc.1"
	I1217 20:29:40.728943  522827 command_runner.go:130] >       ],
	I1217 20:29:40.728946  522827 command_runner.go:130] >       "repoDigests":  [
	I1217 20:29:40.728954  522827 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:58367b5c0428495c0c12411fa7a018f5d40fe57307b85d8935b1ed35706ff7ee",
	I1217 20:29:40.728962  522827 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:e6ee3594f9ff061c53d6721bc04b810ec4227e28da3bd98e59206d552d45cde8"
	I1217 20:29:40.728965  522827 command_runner.go:130] >       ],
	I1217 20:29:40.728969  522827 command_runner.go:130] >       "size":  "85015535",
	I1217 20:29:40.728972  522827 command_runner.go:130] >       "uid":  {
	I1217 20:29:40.728976  522827 command_runner.go:130] >         "value":  "0"
	I1217 20:29:40.728979  522827 command_runner.go:130] >       },
	I1217 20:29:40.728983  522827 command_runner.go:130] >       "username":  "",
	I1217 20:29:40.728986  522827 command_runner.go:130] >       "pinned":  false
	I1217 20:29:40.728996  522827 command_runner.go:130] >     },
	I1217 20:29:40.728999  522827 command_runner.go:130] >     {
	I1217 20:29:40.729006  522827 command_runner.go:130] >       "id":  "a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a",
	I1217 20:29:40.729009  522827 command_runner.go:130] >       "repoTags":  [
	I1217 20:29:40.729015  522827 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1"
	I1217 20:29:40.729018  522827 command_runner.go:130] >       ],
	I1217 20:29:40.729022  522827 command_runner.go:130] >       "repoDigests":  [
	I1217 20:29:40.729031  522827 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:42360249c0c729ed0542bc8e4a6cd9ba4df358a4e5a140f6c24d5f966ee5121f",
	I1217 20:29:40.729039  522827 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:57ab0f75f58d99f4be7bff7bdda015fcbf1b7c20e58ba2722c8c39f751dc8c98"
	I1217 20:29:40.729042  522827 command_runner.go:130] >       ],
	I1217 20:29:40.729046  522827 command_runner.go:130] >       "size":  "72170325",
	I1217 20:29:40.729049  522827 command_runner.go:130] >       "uid":  {
	I1217 20:29:40.729053  522827 command_runner.go:130] >         "value":  "0"
	I1217 20:29:40.729056  522827 command_runner.go:130] >       },
	I1217 20:29:40.729060  522827 command_runner.go:130] >       "username":  "",
	I1217 20:29:40.729064  522827 command_runner.go:130] >       "pinned":  false
	I1217 20:29:40.729067  522827 command_runner.go:130] >     },
	I1217 20:29:40.729070  522827 command_runner.go:130] >     {
	I1217 20:29:40.729076  522827 command_runner.go:130] >       "id":  "7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e",
	I1217 20:29:40.729081  522827 command_runner.go:130] >       "repoTags":  [
	I1217 20:29:40.729086  522827 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-rc.1"
	I1217 20:29:40.729089  522827 command_runner.go:130] >       ],
	I1217 20:29:40.729093  522827 command_runner.go:130] >       "repoDigests":  [
	I1217 20:29:40.729100  522827 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:709cbcd809826ad98b553d8e283a04db70fa653526d1c2a5e1b50000701b2b6f",
	I1217 20:29:40.729108  522827 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bdd1fa8b53558a2e1967379a36b085c93faf15581e5fa9f212baf679d89c5bb5"
	I1217 20:29:40.729111  522827 command_runner.go:130] >       ],
	I1217 20:29:40.729115  522827 command_runner.go:130] >       "size":  "74107287",
	I1217 20:29:40.729119  522827 command_runner.go:130] >       "username":  "",
	I1217 20:29:40.729123  522827 command_runner.go:130] >       "pinned":  false
	I1217 20:29:40.729125  522827 command_runner.go:130] >     },
	I1217 20:29:40.729128  522827 command_runner.go:130] >     {
	I1217 20:29:40.729135  522827 command_runner.go:130] >       "id":  "abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde",
	I1217 20:29:40.729138  522827 command_runner.go:130] >       "repoTags":  [
	I1217 20:29:40.729147  522827 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-rc.1"
	I1217 20:29:40.729150  522827 command_runner.go:130] >       ],
	I1217 20:29:40.729154  522827 command_runner.go:130] >       "repoDigests":  [
	I1217 20:29:40.729163  522827 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8155e3db27c7081abfc8eb5da70820cfeaf0bba7449e45360e8220e670f417d3",
	I1217 20:29:40.729180  522827 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:9ac9664e74153a60bf2c27af77561abc33d85a716a48893c7e50ad356adc4ea0"
	I1217 20:29:40.729183  522827 command_runner.go:130] >       ],
	I1217 20:29:40.729187  522827 command_runner.go:130] >       "size":  "49822549",
	I1217 20:29:40.729191  522827 command_runner.go:130] >       "uid":  {
	I1217 20:29:40.729195  522827 command_runner.go:130] >         "value":  "0"
	I1217 20:29:40.729198  522827 command_runner.go:130] >       },
	I1217 20:29:40.729202  522827 command_runner.go:130] >       "username":  "",
	I1217 20:29:40.729205  522827 command_runner.go:130] >       "pinned":  false
	I1217 20:29:40.729208  522827 command_runner.go:130] >     },
	I1217 20:29:40.729212  522827 command_runner.go:130] >     {
	I1217 20:29:40.729218  522827 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1217 20:29:40.729221  522827 command_runner.go:130] >       "repoTags":  [
	I1217 20:29:40.729225  522827 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1217 20:29:40.729228  522827 command_runner.go:130] >       ],
	I1217 20:29:40.729232  522827 command_runner.go:130] >       "repoDigests":  [
	I1217 20:29:40.729239  522827 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1217 20:29:40.729246  522827 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1217 20:29:40.729249  522827 command_runner.go:130] >       ],
	I1217 20:29:40.729253  522827 command_runner.go:130] >       "size":  "519884",
	I1217 20:29:40.729256  522827 command_runner.go:130] >       "uid":  {
	I1217 20:29:40.729260  522827 command_runner.go:130] >         "value":  "65535"
	I1217 20:29:40.729263  522827 command_runner.go:130] >       },
	I1217 20:29:40.729267  522827 command_runner.go:130] >       "username":  "",
	I1217 20:29:40.729271  522827 command_runner.go:130] >       "pinned":  true
	I1217 20:29:40.729274  522827 command_runner.go:130] >     }
	I1217 20:29:40.729276  522827 command_runner.go:130] >   ]
	I1217 20:29:40.729279  522827 command_runner.go:130] > }
	I1217 20:29:40.730532  522827 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 20:29:40.730563  522827 cache_images.go:86] Images are preloaded, skipping loading
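	For readers tracing the "all images are preloaded" conclusion above: a minimal Go sketch of that kind of check, decoding image-list JSON of the shape whose tail is shown (the top-level "images" key matches "crictl images -o json" output; the required tag below is a hypothetical stand-in for minikube's preload manifest, and this is illustrative, not minikube's code):

	// Illustrative sketch only: decode a CRI image listing and check that
	// a required repo tag is present. Field names mirror the dump above.
	package main

	import (
		"encoding/json"
		"fmt"
	)

	type criImage struct {
		ID       string   `json:"id"`
		RepoTags []string `json:"repoTags"`
		Pinned   bool     `json:"pinned"`
	}

	type imageList struct {
		Images []criImage `json:"images"`
	}

	func main() {
		// Hypothetical input; in practice this would be the full JSON dump.
		raw := []byte(`{"images":[{"id":"abc","repoTags":["registry.k8s.io/pause:3.10.1"],"pinned":true}]}`)

		var list imageList
		if err := json.Unmarshal(raw, &list); err != nil {
			panic(err)
		}

		required := "registry.k8s.io/pause:3.10.1"
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				if tag == required {
					fmt.Println("preloaded:", required)
					return
				}
			}
		}
		fmt.Println("missing:", required)
	}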
	I1217 20:29:40.730572  522827 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 crio true true} ...
	I1217 20:29:40.730679  522827 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-655452 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-655452 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
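	The kubelet unit above is logged as one rendered blob; a minimal sketch, assuming a simple struct of node parameters, of how such a systemd drop-in could be rendered with text/template (the field names and the trimmed flag list are assumptions for illustration; minikube's actual template carries more flags):

	// Illustrative sketch only: render a kubelet drop-in from node parameters.
	package main

	import (
		"os"
		"text/template"
	)

	const unit = `[Unit]
	Wants={{.Runtime}}.service

	[Service]
	ExecStart=
	ExecStart={{.KubeletPath}} --hostname-override={{.Hostname}} --node-ip={{.NodeIP}} --kubeconfig=/etc/kubernetes/kubelet.conf

	[Install]
	`

	func main() {
		t := template.Must(template.New("kubelet").Parse(unit))
		// Values taken from the rendered unit logged above.
		err := t.Execute(os.Stdout, map[string]string{
			"Runtime":     "crio",
			"KubeletPath": "/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet",
			"Hostname":    "functional-655452",
			"NodeIP":      "192.168.49.2",
		})
		if err != nil {
			panic(err)
		}
	}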
	I1217 20:29:40.730767  522827 ssh_runner.go:195] Run: crio config
	I1217 20:29:40.759067  522827 command_runner.go:130] ! time="2025-12-17T20:29:40.758680307Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1217 20:29:40.759091  522827 command_runner.go:130] ! time="2025-12-17T20:29:40.758877363Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1217 20:29:40.759355  522827 command_runner.go:130] ! time="2025-12-17T20:29:40.759160664Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1217 20:29:40.759513  522827 command_runner.go:130] ! time="2025-12-17T20:29:40.75929148Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1217 20:29:40.759764  522827 command_runner.go:130] ! time="2025-12-17T20:29:40.759610703Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:29:40.760178  522827 command_runner.go:130] ! time="2025-12-17T20:29:40.759978034Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1217 20:29:40.781892  522827 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1217 20:29:40.789853  522827 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1217 20:29:40.789886  522827 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1217 20:29:40.789894  522827 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1217 20:29:40.789897  522827 command_runner.go:130] > #
	I1217 20:29:40.789905  522827 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1217 20:29:40.789911  522827 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1217 20:29:40.789918  522827 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1217 20:29:40.789927  522827 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1217 20:29:40.789931  522827 command_runner.go:130] > # reload'.
	I1217 20:29:40.789938  522827 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1217 20:29:40.789949  522827 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1217 20:29:40.789959  522827 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1217 20:29:40.789965  522827 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1217 20:29:40.789972  522827 command_runner.go:130] > [crio]
	I1217 20:29:40.789978  522827 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1217 20:29:40.789983  522827 command_runner.go:130] > # containers images, in this directory.
	I1217 20:29:40.789993  522827 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1217 20:29:40.790003  522827 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1217 20:29:40.790008  522827 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1217 20:29:40.790017  522827 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1217 20:29:40.790024  522827 command_runner.go:130] > # imagestore = ""
	I1217 20:29:40.790038  522827 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1217 20:29:40.790048  522827 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1217 20:29:40.790053  522827 command_runner.go:130] > # storage_driver = "overlay"
	I1217 20:29:40.790058  522827 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1217 20:29:40.790065  522827 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1217 20:29:40.790069  522827 command_runner.go:130] > # storage_option = [
	I1217 20:29:40.790073  522827 command_runner.go:130] > # ]
	I1217 20:29:40.790079  522827 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1217 20:29:40.790092  522827 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1217 20:29:40.790100  522827 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1217 20:29:40.790106  522827 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1217 20:29:40.790112  522827 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1217 20:29:40.790119  522827 command_runner.go:130] > # always happen on a node reboot
	I1217 20:29:40.790124  522827 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1217 20:29:40.790139  522827 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1217 20:29:40.790152  522827 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1217 20:29:40.790158  522827 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1217 20:29:40.790162  522827 command_runner.go:130] > # version_file_persist = ""
	I1217 20:29:40.790170  522827 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1217 20:29:40.790180  522827 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1217 20:29:40.790184  522827 command_runner.go:130] > # internal_wipe = true
	I1217 20:29:40.790193  522827 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1217 20:29:40.790202  522827 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1217 20:29:40.790206  522827 command_runner.go:130] > # internal_repair = true
	I1217 20:29:40.790211  522827 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1217 20:29:40.790219  522827 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1217 20:29:40.790226  522827 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1217 20:29:40.790232  522827 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1217 20:29:40.790241  522827 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1217 20:29:40.790251  522827 command_runner.go:130] > [crio.api]
	I1217 20:29:40.790257  522827 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1217 20:29:40.790262  522827 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1217 20:29:40.790271  522827 command_runner.go:130] > # IP address on which the stream server will listen.
	I1217 20:29:40.790278  522827 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1217 20:29:40.790285  522827 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1217 20:29:40.790290  522827 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1217 20:29:40.790297  522827 command_runner.go:130] > # stream_port = "0"
	I1217 20:29:40.790302  522827 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1217 20:29:40.790307  522827 command_runner.go:130] > # stream_enable_tls = false
	I1217 20:29:40.790313  522827 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1217 20:29:40.790320  522827 command_runner.go:130] > # stream_idle_timeout = ""
	I1217 20:29:40.790330  522827 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1217 20:29:40.790339  522827 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1217 20:29:40.790343  522827 command_runner.go:130] > # stream_tls_cert = ""
	I1217 20:29:40.790349  522827 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1217 20:29:40.790357  522827 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1217 20:29:40.790361  522827 command_runner.go:130] > # stream_tls_key = ""
	I1217 20:29:40.790367  522827 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1217 20:29:40.790377  522827 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1217 20:29:40.790382  522827 command_runner.go:130] > # automatically pick up the changes.
	I1217 20:29:40.790385  522827 command_runner.go:130] > # stream_tls_ca = ""
	I1217 20:29:40.790402  522827 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1217 20:29:40.790415  522827 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1217 20:29:40.790423  522827 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1217 20:29:40.790428  522827 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1217 20:29:40.790437  522827 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1217 20:29:40.790443  522827 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1217 20:29:40.790447  522827 command_runner.go:130] > [crio.runtime]
	I1217 20:29:40.790455  522827 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1217 20:29:40.790465  522827 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1217 20:29:40.790470  522827 command_runner.go:130] > # "nofile=1024:2048"
	I1217 20:29:40.790476  522827 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1217 20:29:40.790480  522827 command_runner.go:130] > # default_ulimits = [
	I1217 20:29:40.790486  522827 command_runner.go:130] > # ]
	I1217 20:29:40.790493  522827 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1217 20:29:40.790499  522827 command_runner.go:130] > # no_pivot = false
	I1217 20:29:40.790505  522827 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1217 20:29:40.790511  522827 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1217 20:29:40.790518  522827 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1217 20:29:40.790525  522827 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1217 20:29:40.790530  522827 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1217 20:29:40.790539  522827 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1217 20:29:40.790543  522827 command_runner.go:130] > # conmon = ""
	I1217 20:29:40.790547  522827 command_runner.go:130] > # Cgroup setting for conmon
	I1217 20:29:40.790558  522827 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1217 20:29:40.790563  522827 command_runner.go:130] > conmon_cgroup = "pod"
	I1217 20:29:40.790572  522827 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1217 20:29:40.790585  522827 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1217 20:29:40.790592  522827 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1217 20:29:40.790603  522827 command_runner.go:130] > # conmon_env = [
	I1217 20:29:40.790606  522827 command_runner.go:130] > # ]
	I1217 20:29:40.790611  522827 command_runner.go:130] > # Additional environment variables to set for all the
	I1217 20:29:40.790621  522827 command_runner.go:130] > # containers. These are overridden if set in the
	I1217 20:29:40.790627  522827 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1217 20:29:40.790631  522827 command_runner.go:130] > # default_env = [
	I1217 20:29:40.790634  522827 command_runner.go:130] > # ]
	I1217 20:29:40.790639  522827 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1217 20:29:40.790647  522827 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1217 20:29:40.790653  522827 command_runner.go:130] > # selinux = false
	I1217 20:29:40.790660  522827 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1217 20:29:40.790675  522827 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1217 20:29:40.790682  522827 command_runner.go:130] > # This option supports live configuration reload.
	I1217 20:29:40.790691  522827 command_runner.go:130] > # seccomp_profile = ""
	I1217 20:29:40.790698  522827 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1217 20:29:40.790703  522827 command_runner.go:130] > # This option supports live configuration reload.
	I1217 20:29:40.790707  522827 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1217 20:29:40.790717  522827 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1217 20:29:40.790723  522827 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1217 20:29:40.790730  522827 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1217 20:29:40.790738  522827 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1217 20:29:40.790744  522827 command_runner.go:130] > # This option supports live configuration reload.
	I1217 20:29:40.790751  522827 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1217 20:29:40.790757  522827 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1217 20:29:40.790761  522827 command_runner.go:130] > # the cgroup blockio controller.
	I1217 20:29:40.790765  522827 command_runner.go:130] > # blockio_config_file = ""
	I1217 20:29:40.790774  522827 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1217 20:29:40.790780  522827 command_runner.go:130] > # blockio parameters.
	I1217 20:29:40.790790  522827 command_runner.go:130] > # blockio_reload = false
	I1217 20:29:40.790796  522827 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1217 20:29:40.790800  522827 command_runner.go:130] > # irqbalance daemon.
	I1217 20:29:40.790805  522827 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1217 20:29:40.790814  522827 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1217 20:29:40.790828  522827 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1217 20:29:40.790836  522827 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1217 20:29:40.790845  522827 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1217 20:29:40.790852  522827 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1217 20:29:40.790859  522827 command_runner.go:130] > # This option supports live configuration reload.
	I1217 20:29:40.790863  522827 command_runner.go:130] > # rdt_config_file = ""
	I1217 20:29:40.790869  522827 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1217 20:29:40.790873  522827 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1217 20:29:40.790881  522827 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1217 20:29:40.790885  522827 command_runner.go:130] > # separate_pull_cgroup = ""
	I1217 20:29:40.790892  522827 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1217 20:29:40.790900  522827 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1217 20:29:40.790904  522827 command_runner.go:130] > # will be added.
	I1217 20:29:40.790908  522827 command_runner.go:130] > # default_capabilities = [
	I1217 20:29:40.790920  522827 command_runner.go:130] > # 	"CHOWN",
	I1217 20:29:40.790924  522827 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1217 20:29:40.790927  522827 command_runner.go:130] > # 	"FSETID",
	I1217 20:29:40.790930  522827 command_runner.go:130] > # 	"FOWNER",
	I1217 20:29:40.790940  522827 command_runner.go:130] > # 	"SETGID",
	I1217 20:29:40.790944  522827 command_runner.go:130] > # 	"SETUID",
	I1217 20:29:40.790963  522827 command_runner.go:130] > # 	"SETPCAP",
	I1217 20:29:40.790971  522827 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1217 20:29:40.790975  522827 command_runner.go:130] > # 	"KILL",
	I1217 20:29:40.790977  522827 command_runner.go:130] > # ]
	I1217 20:29:40.790985  522827 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1217 20:29:40.790992  522827 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1217 20:29:40.790999  522827 command_runner.go:130] > # add_inheritable_capabilities = false
	I1217 20:29:40.791005  522827 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1217 20:29:40.791018  522827 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1217 20:29:40.791023  522827 command_runner.go:130] > default_sysctls = [
	I1217 20:29:40.791030  522827 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1217 20:29:40.791033  522827 command_runner.go:130] > ]
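	The default_sysctls entry above lowers the unprivileged-port floor to 0 inside containers; a minimal sketch, assuming a Linux environment, that reads the sysctl back to confirm the effective value (illustrative only, not part of the test suite):

	// Illustrative sketch only: read the sysctl set by default_sysctls above.
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		b, err := os.ReadFile("/proc/sys/net/ipv4/ip_unprivileged_port_start")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// A value of 0 means unprivileged processes may bind low ports.
		fmt.Println("ip_unprivileged_port_start =", strings.TrimSpace(string(b)))
	}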
	I1217 20:29:40.791038  522827 command_runner.go:130] > # List of devices on the host that a
	I1217 20:29:40.791044  522827 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1217 20:29:40.791048  522827 command_runner.go:130] > # allowed_devices = [
	I1217 20:29:40.791055  522827 command_runner.go:130] > # 	"/dev/fuse",
	I1217 20:29:40.791059  522827 command_runner.go:130] > # 	"/dev/net/tun",
	I1217 20:29:40.791062  522827 command_runner.go:130] > # ]
	I1217 20:29:40.791067  522827 command_runner.go:130] > # List of additional devices. specified as
	I1217 20:29:40.791081  522827 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1217 20:29:40.791088  522827 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1217 20:29:40.791096  522827 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1217 20:29:40.791103  522827 command_runner.go:130] > # additional_devices = [
	I1217 20:29:40.791110  522827 command_runner.go:130] > # ]
	I1217 20:29:40.791115  522827 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1217 20:29:40.791119  522827 command_runner.go:130] > # cdi_spec_dirs = [
	I1217 20:29:40.791122  522827 command_runner.go:130] > # 	"/etc/cdi",
	I1217 20:29:40.791126  522827 command_runner.go:130] > # 	"/var/run/cdi",
	I1217 20:29:40.791130  522827 command_runner.go:130] > # ]
	I1217 20:29:40.791136  522827 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1217 20:29:40.791144  522827 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1217 20:29:40.791149  522827 command_runner.go:130] > # Defaults to false.
	I1217 20:29:40.791156  522827 command_runner.go:130] > # device_ownership_from_security_context = false
	I1217 20:29:40.791164  522827 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1217 20:29:40.791178  522827 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1217 20:29:40.791181  522827 command_runner.go:130] > # hooks_dir = [
	I1217 20:29:40.791186  522827 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1217 20:29:40.791189  522827 command_runner.go:130] > # ]
	I1217 20:29:40.791195  522827 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1217 20:29:40.791205  522827 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1217 20:29:40.791210  522827 command_runner.go:130] > # its default mounts from the following two files:
	I1217 20:29:40.791220  522827 command_runner.go:130] > #
	I1217 20:29:40.791229  522827 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1217 20:29:40.791240  522827 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1217 20:29:40.791248  522827 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1217 20:29:40.791251  522827 command_runner.go:130] > #
	I1217 20:29:40.791257  522827 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1217 20:29:40.791274  522827 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1217 20:29:40.791280  522827 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1217 20:29:40.791285  522827 command_runner.go:130] > #      only add mounts it finds in this file.
	I1217 20:29:40.791288  522827 command_runner.go:130] > #
	I1217 20:29:40.791292  522827 command_runner.go:130] > # default_mounts_file = ""
	I1217 20:29:40.791301  522827 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1217 20:29:40.791316  522827 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1217 20:29:40.791320  522827 command_runner.go:130] > # pids_limit = -1
	I1217 20:29:40.791326  522827 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1217 20:29:40.791335  522827 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1217 20:29:40.791343  522827 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1217 20:29:40.791354  522827 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1217 20:29:40.791357  522827 command_runner.go:130] > # log_size_max = -1
	I1217 20:29:40.791364  522827 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1217 20:29:40.791368  522827 command_runner.go:130] > # log_to_journald = false
	I1217 20:29:40.791374  522827 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1217 20:29:40.791383  522827 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1217 20:29:40.791391  522827 command_runner.go:130] > # Path to directory for container attach sockets.
	I1217 20:29:40.791396  522827 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1217 20:29:40.791401  522827 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1217 20:29:40.791405  522827 command_runner.go:130] > # bind_mount_prefix = ""
	I1217 20:29:40.791417  522827 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1217 20:29:40.791421  522827 command_runner.go:130] > # read_only = false
	I1217 20:29:40.791427  522827 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1217 20:29:40.791437  522827 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1217 20:29:40.791441  522827 command_runner.go:130] > # live configuration reload.
	I1217 20:29:40.791445  522827 command_runner.go:130] > # log_level = "info"
	I1217 20:29:40.791454  522827 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1217 20:29:40.791460  522827 command_runner.go:130] > # This option supports live configuration reload.
	I1217 20:29:40.791466  522827 command_runner.go:130] > # log_filter = ""
	I1217 20:29:40.791472  522827 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1217 20:29:40.791481  522827 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1217 20:29:40.791485  522827 command_runner.go:130] > # separated by comma.
	I1217 20:29:40.791493  522827 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1217 20:29:40.791497  522827 command_runner.go:130] > # uid_mappings = ""
	I1217 20:29:40.791506  522827 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1217 20:29:40.791518  522827 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1217 20:29:40.791523  522827 command_runner.go:130] > # separated by comma.
	I1217 20:29:40.791530  522827 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1217 20:29:40.791535  522827 command_runner.go:130] > # gid_mappings = ""
	I1217 20:29:40.791540  522827 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1217 20:29:40.791549  522827 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1217 20:29:40.791556  522827 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1217 20:29:40.791565  522827 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1217 20:29:40.791572  522827 command_runner.go:130] > # minimum_mappable_uid = -1
	I1217 20:29:40.791604  522827 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1217 20:29:40.791611  522827 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1217 20:29:40.791617  522827 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1217 20:29:40.791627  522827 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1217 20:29:40.791634  522827 command_runner.go:130] > # minimum_mappable_gid = -1
	I1217 20:29:40.791640  522827 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1217 20:29:40.791648  522827 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1217 20:29:40.791662  522827 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1217 20:29:40.791666  522827 command_runner.go:130] > # ctr_stop_timeout = 30
	I1217 20:29:40.791672  522827 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1217 20:29:40.791680  522827 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1217 20:29:40.791685  522827 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1217 20:29:40.791690  522827 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1217 20:29:40.791694  522827 command_runner.go:130] > # drop_infra_ctr = true
	I1217 20:29:40.791700  522827 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1217 20:29:40.791712  522827 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1217 20:29:40.791723  522827 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1217 20:29:40.791727  522827 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1217 20:29:40.791734  522827 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1217 20:29:40.791743  522827 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1217 20:29:40.791749  522827 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1217 20:29:40.791756  522827 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1217 20:29:40.791760  522827 command_runner.go:130] > # shared_cpuset = ""
	I1217 20:29:40.791766  522827 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1217 20:29:40.791773  522827 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1217 20:29:40.791777  522827 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1217 20:29:40.791784  522827 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1217 20:29:40.791795  522827 command_runner.go:130] > # pinns_path = ""
	I1217 20:29:40.791801  522827 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1217 20:29:40.791807  522827 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1217 20:29:40.791814  522827 command_runner.go:130] > # enable_criu_support = true
	I1217 20:29:40.791819  522827 command_runner.go:130] > # Enable/disable the generation of the container,
	I1217 20:29:40.791826  522827 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1217 20:29:40.791833  522827 command_runner.go:130] > # enable_pod_events = false
	I1217 20:29:40.791839  522827 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1217 20:29:40.791845  522827 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1217 20:29:40.791849  522827 command_runner.go:130] > # default_runtime = "crun"
	I1217 20:29:40.791857  522827 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1217 20:29:40.791865  522827 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1217 20:29:40.791874  522827 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1217 20:29:40.791887  522827 command_runner.go:130] > # creation as a file is not desired either.
	I1217 20:29:40.791896  522827 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1217 20:29:40.791903  522827 command_runner.go:130] > # the hostname is being managed dynamically.
	I1217 20:29:40.791910  522827 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1217 20:29:40.791914  522827 command_runner.go:130] > # ]
	I1217 20:29:40.791920  522827 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1217 20:29:40.791929  522827 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1217 20:29:40.791935  522827 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1217 20:29:40.791943  522827 command_runner.go:130] > # Each entry in the table should follow the format:
	I1217 20:29:40.791946  522827 command_runner.go:130] > #
	I1217 20:29:40.791951  522827 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1217 20:29:40.791958  522827 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1217 20:29:40.791964  522827 command_runner.go:130] > # runtime_type = "oci"
	I1217 20:29:40.791969  522827 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1217 20:29:40.791976  522827 command_runner.go:130] > # inherit_default_runtime = false
	I1217 20:29:40.791981  522827 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1217 20:29:40.791986  522827 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1217 20:29:40.791990  522827 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1217 20:29:40.791996  522827 command_runner.go:130] > # monitor_env = []
	I1217 20:29:40.792001  522827 command_runner.go:130] > # privileged_without_host_devices = false
	I1217 20:29:40.792008  522827 command_runner.go:130] > # allowed_annotations = []
	I1217 20:29:40.792014  522827 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1217 20:29:40.792017  522827 command_runner.go:130] > # no_sync_log = false
	I1217 20:29:40.792021  522827 command_runner.go:130] > # default_annotations = {}
	I1217 20:29:40.792028  522827 command_runner.go:130] > # stream_websockets = false
	I1217 20:29:40.792034  522827 command_runner.go:130] > # seccomp_profile = ""
	I1217 20:29:40.792066  522827 command_runner.go:130] > # Where:
	I1217 20:29:40.792076  522827 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1217 20:29:40.792083  522827 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1217 20:29:40.792090  522827 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1217 20:29:40.792098  522827 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1217 20:29:40.792102  522827 command_runner.go:130] > #   in $PATH.
	I1217 20:29:40.792108  522827 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1217 20:29:40.792113  522827 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1217 20:29:40.792122  522827 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1217 20:29:40.792128  522827 command_runner.go:130] > #   state.
	I1217 20:29:40.792134  522827 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1217 20:29:40.792143  522827 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1217 20:29:40.792149  522827 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1217 20:29:40.792155  522827 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1217 20:29:40.792163  522827 command_runner.go:130] > #   the values from the default runtime on load time.
	I1217 20:29:40.792174  522827 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1217 20:29:40.792183  522827 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1217 20:29:40.792190  522827 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1217 20:29:40.792199  522827 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1217 20:29:40.792207  522827 command_runner.go:130] > #   The currently recognized values are:
	I1217 20:29:40.792214  522827 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1217 20:29:40.792222  522827 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1217 20:29:40.792231  522827 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1217 20:29:40.792237  522827 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1217 20:29:40.792251  522827 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1217 20:29:40.792260  522827 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1217 20:29:40.792270  522827 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1217 20:29:40.792277  522827 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1217 20:29:40.792284  522827 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1217 20:29:40.792293  522827 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1217 20:29:40.792309  522827 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1217 20:29:40.792316  522827 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1217 20:29:40.792322  522827 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1217 20:29:40.792331  522827 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1217 20:29:40.792337  522827 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1217 20:29:40.792345  522827 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1217 20:29:40.792353  522827 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1217 20:29:40.792358  522827 command_runner.go:130] > #   deprecated option "conmon".
	I1217 20:29:40.792367  522827 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1217 20:29:40.792380  522827 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1217 20:29:40.792387  522827 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1217 20:29:40.792392  522827 command_runner.go:130] > #   should be moved to the container's cgroup
	I1217 20:29:40.792405  522827 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1217 20:29:40.792410  522827 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1217 20:29:40.792420  522827 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1217 20:29:40.792424  522827 command_runner.go:130] > #   conmon-rs by using:
	I1217 20:29:40.792432  522827 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1217 20:29:40.792441  522827 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1217 20:29:40.792454  522827 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1217 20:29:40.792465  522827 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1217 20:29:40.792471  522827 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1217 20:29:40.792485  522827 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1217 20:29:40.792497  522827 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1217 20:29:40.792506  522827 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1217 20:29:40.792515  522827 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1217 20:29:40.792524  522827 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1217 20:29:40.792529  522827 command_runner.go:130] > #   when a machine crash happens.
	I1217 20:29:40.792536  522827 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1217 20:29:40.792546  522827 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1217 20:29:40.792558  522827 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1217 20:29:40.792562  522827 command_runner.go:130] > #   seccomp profile for the runtime.
	I1217 20:29:40.792568  522827 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1217 20:29:40.792579  522827 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1217 20:29:40.792582  522827 command_runner.go:130] > #
	I1217 20:29:40.792587  522827 command_runner.go:130] > # Using the seccomp notifier feature:
	I1217 20:29:40.792590  522827 command_runner.go:130] > #
	I1217 20:29:40.792596  522827 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1217 20:29:40.792605  522827 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1217 20:29:40.792608  522827 command_runner.go:130] > #
	I1217 20:29:40.792615  522827 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1217 20:29:40.792630  522827 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1217 20:29:40.792633  522827 command_runner.go:130] > #
	I1217 20:29:40.792642  522827 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1217 20:29:40.792649  522827 command_runner.go:130] > # feature.
	I1217 20:29:40.792652  522827 command_runner.go:130] > #
	I1217 20:29:40.792658  522827 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1217 20:29:40.792667  522827 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1217 20:29:40.792673  522827 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1217 20:29:40.792679  522827 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1217 20:29:40.792688  522827 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1217 20:29:40.792692  522827 command_runner.go:130] > #
	I1217 20:29:40.792702  522827 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1217 20:29:40.792711  522827 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1217 20:29:40.792715  522827 command_runner.go:130] > #
	I1217 20:29:40.792721  522827 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1217 20:29:40.792727  522827 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1217 20:29:40.792732  522827 command_runner.go:130] > #
	I1217 20:29:40.792738  522827 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1217 20:29:40.792744  522827 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1217 20:29:40.792750  522827 command_runner.go:130] > # limitation.
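	To make the notifier description above concrete: a minimal sketch of the sandbox annotation it keys on (the surrounding wiring is hypothetical; in practice the key/value pair is set in the pod's metadata.annotations, and per the text above a value of "stop" terminates the workload after the 5-second timeout):

	// Illustrative sketch only: the annotation CRI-O inspects for the
	// seccomp notifier feature, shown as a plain map.
	package main

	import "fmt"

	func main() {
		annotations := map[string]string{
			"io.kubernetes.cri-o.seccompNotifierAction": "stop",
		}
		for k, v := range annotations {
			fmt.Printf("%s=%s\n", k, v)
		}
	}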
	I1217 20:29:40.792754  522827 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1217 20:29:40.792758  522827 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1217 20:29:40.792761  522827 command_runner.go:130] > runtime_type = ""
	I1217 20:29:40.792765  522827 command_runner.go:130] > runtime_root = "/run/crun"
	I1217 20:29:40.792769  522827 command_runner.go:130] > inherit_default_runtime = false
	I1217 20:29:40.792774  522827 command_runner.go:130] > runtime_config_path = ""
	I1217 20:29:40.792781  522827 command_runner.go:130] > container_min_memory = ""
	I1217 20:29:40.792785  522827 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1217 20:29:40.792796  522827 command_runner.go:130] > monitor_cgroup = "pod"
	I1217 20:29:40.792801  522827 command_runner.go:130] > monitor_exec_cgroup = ""
	I1217 20:29:40.792804  522827 command_runner.go:130] > allowed_annotations = [
	I1217 20:29:40.792809  522827 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1217 20:29:40.792814  522827 command_runner.go:130] > ]
	I1217 20:29:40.792819  522827 command_runner.go:130] > privileged_without_host_devices = false
	I1217 20:29:40.792823  522827 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1217 20:29:40.792828  522827 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1217 20:29:40.792834  522827 command_runner.go:130] > runtime_type = ""
	I1217 20:29:40.792839  522827 command_runner.go:130] > runtime_root = "/run/runc"
	I1217 20:29:40.792842  522827 command_runner.go:130] > inherit_default_runtime = false
	I1217 20:29:40.792846  522827 command_runner.go:130] > runtime_config_path = ""
	I1217 20:29:40.792850  522827 command_runner.go:130] > container_min_memory = ""
	I1217 20:29:40.792856  522827 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1217 20:29:40.792860  522827 command_runner.go:130] > monitor_cgroup = "pod"
	I1217 20:29:40.792864  522827 command_runner.go:130] > monitor_exec_cgroup = ""
	I1217 20:29:40.792875  522827 command_runner.go:130] > privileged_without_host_devices = false
	I1217 20:29:40.792884  522827 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1217 20:29:40.792890  522827 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1217 20:29:40.792896  522827 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1217 20:29:40.792907  522827 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1217 20:29:40.792918  522827 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1217 20:29:40.792930  522827 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores; this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1217 20:29:40.792940  522827 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1217 20:29:40.792947  522827 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1217 20:29:40.792958  522827 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1217 20:29:40.792975  522827 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1217 20:29:40.792980  522827 command_runner.go:130] > # signifying that the default value for that resource type should be overridden.
	I1217 20:29:40.792998  522827 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1217 20:29:40.793004  522827 command_runner.go:130] > # Example:
	I1217 20:29:40.793009  522827 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1217 20:29:40.793014  522827 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1217 20:29:40.793019  522827 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1217 20:29:40.793025  522827 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1217 20:29:40.793029  522827 command_runner.go:130] > # cpuset = "0-1"
	I1217 20:29:40.793033  522827 command_runner.go:130] > # cpushares = "5"
	I1217 20:29:40.793039  522827 command_runner.go:130] > # cpuquota = "1000"
	I1217 20:29:40.793043  522827 command_runner.go:130] > # cpuperiod = "100000"
	I1217 20:29:40.793050  522827 command_runner.go:130] > # cpulimit = "35"
	I1217 20:29:40.793059  522827 command_runner.go:130] > # Where:
	I1217 20:29:40.793066  522827 command_runner.go:130] > # The workload name is workload-type.
	I1217 20:29:40.793073  522827 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1217 20:29:40.793079  522827 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1217 20:29:40.793087  522827 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1217 20:29:40.793096  522827 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1217 20:29:40.793101  522827 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
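The workload customization above takes effect only at pod creation, when CRI-O reads the sandbox annotations. A minimal sketch of a pod opting into the hypothetical "workload-type" workload, using the $annotation_prefix.$resource/$ctrName form documented a few lines earlier (pod name, container name and the cpushares value are illustrative assumptions, not taken from this run):

    # Hypothetical pod for the example "workload-type" workload above;
    # the prefixed annotation overrides cpushares for container "app".
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: workload-demo
      annotations:
        io.crio/workload: ""
        io.crio.workload-type.cpushares/app: "512"
    spec:
      containers:
      - name: app
        image: registry.k8s.io/pause:3.10.1
    EOF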
	I1217 20:29:40.793106  522827 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1217 20:29:40.793116  522827 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1217 20:29:40.793122  522827 command_runner.go:130] > # Default value is set to true
	I1217 20:29:40.793132  522827 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1217 20:29:40.793141  522827 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1217 20:29:40.793146  522827 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1217 20:29:40.793150  522827 command_runner.go:130] > # Default value is set to 'false'
	I1217 20:29:40.793155  522827 command_runner.go:130] > # disable_hostport_mapping = false
	I1217 20:29:40.793163  522827 command_runner.go:130] > # timezone sets the timezone for a container in CRI-O.
	I1217 20:29:40.793172  522827 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1217 20:29:40.793175  522827 command_runner.go:130] > # timezone = ""
	I1217 20:29:40.793185  522827 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1217 20:29:40.793188  522827 command_runner.go:130] > #
	I1217 20:29:40.793194  522827 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1217 20:29:40.793212  522827 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1217 20:29:40.793215  522827 command_runner.go:130] > [crio.image]
	I1217 20:29:40.793222  522827 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1217 20:29:40.793229  522827 command_runner.go:130] > # default_transport = "docker://"
	I1217 20:29:40.793236  522827 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1217 20:29:40.793243  522827 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1217 20:29:40.793249  522827 command_runner.go:130] > # global_auth_file = ""
	I1217 20:29:40.793255  522827 command_runner.go:130] > # The image used to instantiate infra containers.
	I1217 20:29:40.793260  522827 command_runner.go:130] > # This option supports live configuration reload.
	I1217 20:29:40.793264  522827 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1217 20:29:40.793271  522827 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1217 20:29:40.793277  522827 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1217 20:29:40.793283  522827 command_runner.go:130] > # This option supports live configuration reload.
	I1217 20:29:40.793289  522827 command_runner.go:130] > # pause_image_auth_file = ""
	I1217 20:29:40.793295  522827 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1217 20:29:40.793304  522827 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1217 20:29:40.793311  522827 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1217 20:29:40.793317  522827 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1217 20:29:40.793323  522827 command_runner.go:130] > # pause_command = "/pause"
	I1217 20:29:40.793329  522827 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1217 20:29:40.793335  522827 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1217 20:29:40.793342  522827 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1217 20:29:40.793351  522827 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1217 20:29:40.793357  522827 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1217 20:29:40.793372  522827 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1217 20:29:40.793376  522827 command_runner.go:130] > # pinned_images = [
	I1217 20:29:40.793379  522827 command_runner.go:130] > # ]
	I1217 20:29:40.793388  522827 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1217 20:29:40.793401  522827 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1217 20:29:40.793408  522827 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1217 20:29:40.793416  522827 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1217 20:29:40.793422  522827 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1217 20:29:40.793426  522827 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1217 20:29:40.793432  522827 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1217 20:29:40.793439  522827 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1217 20:29:40.793445  522827 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1217 20:29:40.793456  522827 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I1217 20:29:40.793462  522827 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1217 20:29:40.793467  522827 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
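The signature_policy file set above follows the containers-policy.json(5) format the comments reference. As a sketch only, a minimal permissive policy of that shape looks like this (contents assumed for illustration; the actual /etc/crio/policy.json from this run is not shown in the log):

    # Minimal "accept anything" policy in containers-policy.json(5) format;
    # production policies should scope trust per registry instead.
    cat <<'EOF' | sudo tee /etc/crio/policy.json
    {
      "default": [{"type": "insecureAcceptAnything"}]
    }
    EOF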
	I1217 20:29:40.793473  522827 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1217 20:29:40.793479  522827 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1217 20:29:40.793483  522827 command_runner.go:130] > # changing them here.
	I1217 20:29:40.793488  522827 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1217 20:29:40.793492  522827 command_runner.go:130] > # insecure_registries = [
	I1217 20:29:40.793495  522827 command_runner.go:130] > # ]
	I1217 20:29:40.793514  522827 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1217 20:29:40.793522  522827 command_runner.go:130] > # ignore; the last will ignore volumes entirely.
	I1217 20:29:40.793526  522827 command_runner.go:130] > # image_volumes = "mkdir"
	I1217 20:29:40.793532  522827 command_runner.go:130] > # Temporary directory to use for storing big files
	I1217 20:29:40.793538  522827 command_runner.go:130] > # big_files_temporary_dir = ""
	I1217 20:29:40.793544  522827 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1217 20:29:40.793554  522827 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1217 20:29:40.793558  522827 command_runner.go:130] > # auto_reload_registries = false
	I1217 20:29:40.793564  522827 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1217 20:29:40.793572  522827 command_runner.go:130] > # gets canceled. This value will also be used to calculate the pull progress interval as pull_progress_timeout / 10.
	I1217 20:29:40.793584  522827 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1217 20:29:40.793589  522827 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1217 20:29:40.793594  522827 command_runner.go:130] > # The mode of short name resolution.
	I1217 20:29:40.793600  522827 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1217 20:29:40.793607  522827 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1217 20:29:40.793613  522827 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1217 20:29:40.793624  522827 command_runner.go:130] > # short_name_mode = "enforcing"
	I1217 20:29:40.793631  522827 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1217 20:29:40.793636  522827 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1217 20:29:40.793643  522827 command_runner.go:130] > # oci_artifact_mount_support = true
	I1217 20:29:40.793649  522827 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1217 20:29:40.793653  522827 command_runner.go:130] > # CNI plugins.
	I1217 20:29:40.793662  522827 command_runner.go:130] > [crio.network]
	I1217 20:29:40.793669  522827 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1217 20:29:40.793674  522827 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1217 20:29:40.793678  522827 command_runner.go:130] > # cni_default_network = ""
	I1217 20:29:40.793683  522827 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1217 20:29:40.793688  522827 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1217 20:29:40.793695  522827 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1217 20:29:40.793701  522827 command_runner.go:130] > # plugin_dirs = [
	I1217 20:29:40.793705  522827 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1217 20:29:40.793708  522827 command_runner.go:130] > # ]
	I1217 20:29:40.793712  522827 command_runner.go:130] > # List of included pod metrics.
	I1217 20:29:40.793716  522827 command_runner.go:130] > # included_pod_metrics = [
	I1217 20:29:40.793721  522827 command_runner.go:130] > # ]
	I1217 20:29:40.793727  522827 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I1217 20:29:40.793733  522827 command_runner.go:130] > [crio.metrics]
	I1217 20:29:40.793738  522827 command_runner.go:130] > # Globally enable or disable metrics support.
	I1217 20:29:40.793742  522827 command_runner.go:130] > # enable_metrics = false
	I1217 20:29:40.793749  522827 command_runner.go:130] > # Specify enabled metrics collectors.
	I1217 20:29:40.793754  522827 command_runner.go:130] > # By default, all metrics are enabled.
	I1217 20:29:40.793760  522827 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1217 20:29:40.793769  522827 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1217 20:29:40.793781  522827 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1217 20:29:40.793788  522827 command_runner.go:130] > # metrics_collectors = [
	I1217 20:29:40.793792  522827 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1217 20:29:40.793796  522827 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1217 20:29:40.793801  522827 command_runner.go:130] > # 	"containers_oom_total",
	I1217 20:29:40.793810  522827 command_runner.go:130] > # 	"processes_defunct",
	I1217 20:29:40.793814  522827 command_runner.go:130] > # 	"operations_total",
	I1217 20:29:40.793818  522827 command_runner.go:130] > # 	"operations_latency_seconds",
	I1217 20:29:40.793825  522827 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1217 20:29:40.793830  522827 command_runner.go:130] > # 	"operations_errors_total",
	I1217 20:29:40.793834  522827 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1217 20:29:40.793838  522827 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1217 20:29:40.793843  522827 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1217 20:29:40.793847  522827 command_runner.go:130] > # 	"image_pulls_success_total",
	I1217 20:29:40.793851  522827 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1217 20:29:40.793857  522827 command_runner.go:130] > # 	"containers_oom_count_total",
	I1217 20:29:40.793862  522827 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1217 20:29:40.793869  522827 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1217 20:29:40.793873  522827 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1217 20:29:40.793876  522827 command_runner.go:130] > # ]
	I1217 20:29:40.793882  522827 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1217 20:29:40.793888  522827 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1217 20:29:40.793894  522827 command_runner.go:130] > # The port on which the metrics server will listen.
	I1217 20:29:40.793898  522827 command_runner.go:130] > # metrics_port = 9090
	I1217 20:29:40.793905  522827 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1217 20:29:40.793909  522827 command_runner.go:130] > # metrics_socket = ""
	I1217 20:29:40.793920  522827 command_runner.go:130] > # The certificate for the secure metrics server.
	I1217 20:29:40.793926  522827 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1217 20:29:40.793932  522827 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1217 20:29:40.793939  522827 command_runner.go:130] > # certificate on any modification event.
	I1217 20:29:40.793942  522827 command_runner.go:130] > # metrics_cert = ""
	I1217 20:29:40.793947  522827 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1217 20:29:40.793959  522827 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1217 20:29:40.793967  522827 command_runner.go:130] > # metrics_key = ""
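Metrics are disabled in this dump (enable_metrics = false), but when enabled the server at metrics_host:metrics_port speaks the standard Prometheus text format, so a quick smoke test on the node could look like the following (this assumes metrics were turned on with the default host and port shown above):

    # Scrape CRI-O's Prometheus endpoint and pick out the operations counters.
    curl -s http://127.0.0.1:9090/metrics | grep operations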
	I1217 20:29:40.793980  522827 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1217 20:29:40.793983  522827 command_runner.go:130] > [crio.tracing]
	I1217 20:29:40.793989  522827 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1217 20:29:40.793996  522827 command_runner.go:130] > # enable_tracing = false
	I1217 20:29:40.794002  522827 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1217 20:29:40.794006  522827 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1217 20:29:40.794015  522827 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1217 20:29:40.794020  522827 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1217 20:29:40.794024  522827 command_runner.go:130] > # CRI-O NRI configuration.
	I1217 20:29:40.794027  522827 command_runner.go:130] > [crio.nri]
	I1217 20:29:40.794031  522827 command_runner.go:130] > # Globally enable or disable NRI.
	I1217 20:29:40.794035  522827 command_runner.go:130] > # enable_nri = true
	I1217 20:29:40.794039  522827 command_runner.go:130] > # NRI socket to listen on.
	I1217 20:29:40.794045  522827 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1217 20:29:40.794050  522827 command_runner.go:130] > # NRI plugin directory to use.
	I1217 20:29:40.794061  522827 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1217 20:29:40.794066  522827 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1217 20:29:40.794073  522827 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1217 20:29:40.794082  522827 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1217 20:29:40.794150  522827 command_runner.go:130] > # nri_disable_connections = false
	I1217 20:29:40.794172  522827 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1217 20:29:40.794178  522827 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1217 20:29:40.794186  522827 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1217 20:29:40.794191  522827 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1217 20:29:40.794200  522827 command_runner.go:130] > # NRI default validator configuration.
	I1217 20:29:40.794211  522827 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1217 20:29:40.794218  522827 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1217 20:29:40.794225  522827 command_runner.go:130] > # can be restricted/rejected:
	I1217 20:29:40.794229  522827 command_runner.go:130] > # - OCI hook injection
	I1217 20:29:40.794235  522827 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1217 20:29:40.794240  522827 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1217 20:29:40.794245  522827 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1217 20:29:40.794252  522827 command_runner.go:130] > # - adjustment of linux namespaces
	I1217 20:29:40.794263  522827 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1217 20:29:40.794277  522827 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1217 20:29:40.794284  522827 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1217 20:29:40.794295  522827 command_runner.go:130] > #
	I1217 20:29:40.794299  522827 command_runner.go:130] > # [crio.nri.default_validator]
	I1217 20:29:40.794304  522827 command_runner.go:130] > # nri_enable_default_validator = false
	I1217 20:29:40.794312  522827 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1217 20:29:40.794318  522827 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1217 20:29:40.794326  522827 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1217 20:29:40.794338  522827 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1217 20:29:40.794343  522827 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1217 20:29:40.794347  522827 command_runner.go:130] > # nri_validator_required_plugins = [
	I1217 20:29:40.794352  522827 command_runner.go:130] > # ]
	I1217 20:29:40.794359  522827 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
	I1217 20:29:40.794368  522827 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1217 20:29:40.794373  522827 command_runner.go:130] > [crio.stats]
	I1217 20:29:40.794386  522827 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1217 20:29:40.794392  522827 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1217 20:29:40.794398  522827 command_runner.go:130] > # stats_collection_period = 0
	I1217 20:29:40.794405  522827 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1217 20:29:40.794411  522827 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1217 20:29:40.794417  522827 command_runner.go:130] > # collection_period = 0
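That is the end of the echoed CRI-O configuration. To reproduce a comparable view directly on the node, CRI-O and crictl offer introspection commands; a sketch, with the caveat that exact subcommands vary across CRI-O releases:

    # Print CRI-O's effective TOML configuration, then the runtime's view
    # over the CRI socket configured above.
    sudo crio config
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock info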
	I1217 20:29:40.794552  522827 cni.go:84] Creating CNI manager for ""
	I1217 20:29:40.794571  522827 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 20:29:40.794583  522827 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 20:29:40.794609  522827 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-655452 NodeName:functional-655452 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 20:29:40.794745  522827 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-655452"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 20:29:40.794827  522827 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1217 20:29:40.802768  522827 command_runner.go:130] > kubeadm
	I1217 20:29:40.802789  522827 command_runner.go:130] > kubectl
	I1217 20:29:40.802794  522827 command_runner.go:130] > kubelet
	I1217 20:29:40.802809  522827 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 20:29:40.802895  522827 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 20:29:40.810641  522827 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1217 20:29:40.826893  522827 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1217 20:29:40.841576  522827 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
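The rendered kubeadm config shown earlier is what lands in /var/tmp/minikube/kubeadm.yaml.new via this scp step. It can be sanity-checked with kubeadm's own validator; a sketch, assuming a kubeadm new enough to ship `kubeadm config validate` (v1.26+, so the v1.35.0-rc.1 binary used here qualifies):

    # Validate the rendered config against the kubeadm.k8s.io/v1beta4,
    # kubelet and kube-proxy component config schemas.
    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new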
	I1217 20:29:40.856014  522827 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1217 20:29:40.859640  522827 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1217 20:29:40.860204  522827 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:29:40.970449  522827 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 20:29:41.821239  522827 certs.go:69] Setting up /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452 for IP: 192.168.49.2
	I1217 20:29:41.821266  522827 certs.go:195] generating shared ca certs ...
	I1217 20:29:41.821284  522827 certs.go:227] acquiring lock for ca certs: {Name:mk2b1bd9fa0b029b02dd38a7b1b08717d4426eca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:29:41.821441  522827 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.key
	I1217 20:29:41.821492  522827 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.key
	I1217 20:29:41.821509  522827 certs.go:257] generating profile certs ...
	I1217 20:29:41.821619  522827 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/client.key
	I1217 20:29:41.821682  522827 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/apiserver.key.aa95dda5
	I1217 20:29:41.821733  522827 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/proxy-client.key
	I1217 20:29:41.821747  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1217 20:29:41.821765  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1217 20:29:41.821780  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1217 20:29:41.821791  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1217 20:29:41.821805  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1217 20:29:41.821817  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1217 20:29:41.821831  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1217 20:29:41.821846  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1217 20:29:41.821894  522827 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem (1338 bytes)
	W1217 20:29:41.821945  522827 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412_empty.pem, impossibly tiny 0 bytes
	I1217 20:29:41.821959  522827 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 20:29:41.821996  522827 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem (1082 bytes)
	I1217 20:29:41.822031  522827 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem (1123 bytes)
	I1217 20:29:41.822058  522827 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem (1675 bytes)
	I1217 20:29:41.822104  522827 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem (1708 bytes)
	I1217 20:29:41.822138  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:29:41.822159  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem -> /usr/share/ca-certificates/488412.pem
	I1217 20:29:41.822175  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem -> /usr/share/ca-certificates/4884122.pem
	I1217 20:29:41.822802  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 20:29:41.845035  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 20:29:41.868336  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 20:29:41.901049  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1217 20:29:41.918871  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 20:29:41.937168  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1217 20:29:41.954450  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 20:29:41.971684  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 20:29:41.988884  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 20:29:42.008645  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem --> /usr/share/ca-certificates/488412.pem (1338 bytes)
	I1217 20:29:42.029398  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem --> /usr/share/ca-certificates/4884122.pem (1708 bytes)
	I1217 20:29:42.047332  522827 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 20:29:42.061588  522827 ssh_runner.go:195] Run: openssl version
	I1217 20:29:42.068928  522827 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1217 20:29:42.069476  522827 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/488412.pem
	I1217 20:29:42.078814  522827 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/488412.pem /etc/ssl/certs/488412.pem
	I1217 20:29:42.088990  522827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/488412.pem
	I1217 20:29:42.093920  522827 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 17 20:21 /usr/share/ca-certificates/488412.pem
	I1217 20:29:42.093987  522827 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 20:21 /usr/share/ca-certificates/488412.pem
	I1217 20:29:42.094097  522827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/488412.pem
	I1217 20:29:42.137804  522827 command_runner.go:130] > 51391683
	I1217 20:29:42.138358  522827 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 20:29:42.147537  522827 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4884122.pem
	I1217 20:29:42.157061  522827 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4884122.pem /etc/ssl/certs/4884122.pem
	I1217 20:29:42.166751  522827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4884122.pem
	I1217 20:29:42.171759  522827 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 17 20:21 /usr/share/ca-certificates/4884122.pem
	I1217 20:29:42.171865  522827 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 20:21 /usr/share/ca-certificates/4884122.pem
	I1217 20:29:42.172010  522827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4884122.pem
	I1217 20:29:42.222515  522827 command_runner.go:130] > 3ec20f2e
	I1217 20:29:42.222600  522827 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 20:29:42.231935  522827 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:29:42.242232  522827 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 20:29:42.250913  522827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:29:42.255543  522827 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 17 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:29:42.255609  522827 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:29:42.255686  522827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:29:42.298361  522827 command_runner.go:130] > b5213941
	I1217 20:29:42.298457  522827 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
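The three openssl/ln/test cycles above reimplement OpenSSL's CA lookup convention by hand: each PEM under /usr/share/ca-certificates is linked into /etc/ssl/certs as <subject-hash>.0, where the hash is exactly what `openssl x509 -hash` prints (51391683, 3ec20f2e, b5213941 above). Condensed into one generic step:

    # Equivalent of each per-cert cycle above: compute the subject hash and
    # install the <hash>.0 symlink that OpenSSL uses to locate trusted CAs.
    pem=/usr/share/ca-certificates/minikubeCA.pem
    sudo ln -fs "$pem" "/etc/ssl/certs/$(openssl x509 -hash -noout -in "$pem").0"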
	I1217 20:29:42.307141  522827 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 20:29:42.311232  522827 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 20:29:42.311338  522827 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1217 20:29:42.311364  522827 command_runner.go:130] > Device: 259,1	Inode: 1313050     Links: 1
	I1217 20:29:42.311390  522827 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1217 20:29:42.311425  522827 command_runner.go:130] > Access: 2025-12-17 20:25:34.088053460 +0000
	I1217 20:29:42.311446  522827 command_runner.go:130] > Modify: 2025-12-17 20:21:29.777917427 +0000
	I1217 20:29:42.311461  522827 command_runner.go:130] > Change: 2025-12-17 20:21:29.777917427 +0000
	I1217 20:29:42.311467  522827 command_runner.go:130] >  Birth: 2025-12-17 20:21:29.777917427 +0000
	I1217 20:29:42.311555  522827 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 20:29:42.352885  522827 command_runner.go:130] > Certificate will not expire
	I1217 20:29:42.353302  522827 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 20:29:42.407045  522827 command_runner.go:130] > Certificate will not expire
	I1217 20:29:42.407143  522827 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 20:29:42.455863  522827 command_runner.go:130] > Certificate will not expire
	I1217 20:29:42.456326  522827 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 20:29:42.505636  522827 command_runner.go:130] > Certificate will not expire
	I1217 20:29:42.506227  522827 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 20:29:42.548331  522827 command_runner.go:130] > Certificate will not expire
	I1217 20:29:42.548862  522827 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1217 20:29:42.590705  522827 command_runner.go:130] > Certificate will not expire
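Each "Certificate will not expire" line is openssl's success output for -checkend 86400: exit status 0 means the certificate is still valid 86400 seconds (24 hours) from now, while a non-zero status would prompt minikube to regenerate the cert. For reference:

    # Exits 0 and prints "Certificate will not expire" if the cert stays
    # valid for the next 24h; exits 1 with "Certificate will expire" otherwise.
    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400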
	I1217 20:29:42.591277  522827 kubeadm.go:401] StartCluster: {Name:functional-655452 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-655452 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:29:42.591354  522827 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 20:29:42.591425  522827 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 20:29:42.618986  522827 cri.go:89] found id: ""
	I1217 20:29:42.619059  522827 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 20:29:42.626323  522827 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1217 20:29:42.626347  522827 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1217 20:29:42.626355  522827 command_runner.go:130] > /var/lib/minikube/etcd:
	I1217 20:29:42.627403  522827 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 20:29:42.627425  522827 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 20:29:42.627476  522827 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 20:29:42.635033  522827 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 20:29:42.635439  522827 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-655452" does not appear in /home/jenkins/minikube-integration/21808-485134/kubeconfig
	I1217 20:29:42.635552  522827 kubeconfig.go:62] /home/jenkins/minikube-integration/21808-485134/kubeconfig needs updating (will repair): [kubeconfig missing "functional-655452" cluster setting kubeconfig missing "functional-655452" context setting]
	I1217 20:29:42.635844  522827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-485134/kubeconfig: {Name:mkc7214551fa855d6d4575d627c3ecd54291b351 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:29:42.636278  522827 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21808-485134/kubeconfig
	I1217 20:29:42.636437  522827 kapi.go:59] client config for functional-655452: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/client.crt", KeyFile:"/home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/client.key", CAFile:"/home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb51f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 20:29:42.636955  522827 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1217 20:29:42.636974  522827 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1217 20:29:42.636979  522827 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1217 20:29:42.636984  522827 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1217 20:29:42.636988  522827 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1217 20:29:42.637054  522827 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1217 20:29:42.637345  522827 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 20:29:42.646583  522827 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1217 20:29:42.646685  522827 kubeadm.go:602] duration metric: took 19.253149ms to restartPrimaryControlPlane
	I1217 20:29:42.646744  522827 kubeadm.go:403] duration metric: took 55.459532ms to StartCluster
	I1217 20:29:42.646789  522827 settings.go:142] acquiring lock: {Name:mk91fac3a58d91b836cd0701183ef7cc1e571672 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:29:42.646894  522827 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21808-485134/kubeconfig
	I1217 20:29:42.647795  522827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-485134/kubeconfig: {Name:mkc7214551fa855d6d4575d627c3ecd54291b351 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:29:42.648137  522827 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 20:29:42.648371  522827 config.go:182] Loaded profile config "functional-655452": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 20:29:42.648423  522827 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 20:29:42.648485  522827 addons.go:70] Setting storage-provisioner=true in profile "functional-655452"
	I1217 20:29:42.648497  522827 addons.go:239] Setting addon storage-provisioner=true in "functional-655452"
	I1217 20:29:42.648521  522827 host.go:66] Checking if "functional-655452" exists ...
	I1217 20:29:42.648902  522827 addons.go:70] Setting default-storageclass=true in profile "functional-655452"
	I1217 20:29:42.648999  522827 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-655452"
	I1217 20:29:42.649042  522827 cli_runner.go:164] Run: docker container inspect functional-655452 --format={{.State.Status}}
	I1217 20:29:42.649424  522827 cli_runner.go:164] Run: docker container inspect functional-655452 --format={{.State.Status}}
	I1217 20:29:42.653921  522827 out.go:179] * Verifying Kubernetes components...
	I1217 20:29:42.656821  522827 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:29:42.689834  522827 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21808-485134/kubeconfig
	I1217 20:29:42.690004  522827 kapi.go:59] client config for functional-655452: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/client.crt", KeyFile:"/home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/client.key", CAFile:"/home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb51f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 20:29:42.690276  522827 addons.go:239] Setting addon default-storageclass=true in "functional-655452"
	I1217 20:29:42.690305  522827 host.go:66] Checking if "functional-655452" exists ...
	I1217 20:29:42.690860  522827 cli_runner.go:164] Run: docker container inspect functional-655452 --format={{.State.Status}}
	I1217 20:29:42.692598  522827 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 20:29:42.699772  522827 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:29:42.699803  522827 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 20:29:42.699871  522827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
	I1217 20:29:42.735975  522827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/functional-655452/id_rsa Username:docker}
	I1217 20:29:42.743517  522827 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 20:29:42.743543  522827 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 20:29:42.743664  522827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
	I1217 20:29:42.778325  522827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/functional-655452/id_rsa Username:docker}
	I1217 20:29:42.848025  522827 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 20:29:42.860324  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:29:42.899199  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 20:29:43.321927  522827 node_ready.go:35] waiting up to 6m0s for node "functional-655452" to be "Ready" ...
	I1217 20:29:43.322118  522827 type.go:168] "Request Body" body=""
	I1217 20:29:43.322203  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:43.322465  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:29:43.322528  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:43.322567  522827 retry.go:31] will retry after 172.422642ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:43.322648  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:29:43.322689  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:43.322715  522827 retry.go:31] will retry after 167.097093ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:43.322809  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:43.490380  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 20:29:43.496229  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:29:43.581353  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:29:43.581433  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:43.581460  522827 retry.go:31] will retry after 331.036154ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:43.581553  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:29:43.581605  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:43.581639  522827 retry.go:31] will retry after 400.38477ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:43.822877  522827 type.go:168] "Request Body" body=""
	I1217 20:29:43.822949  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:43.823300  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:43.912722  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 20:29:43.970874  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:29:43.974629  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:43.974708  522827 retry.go:31] will retry after 462.319516ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:43.982922  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:29:44.044566  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:29:44.048683  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:44.048723  522827 retry.go:31] will retry after 443.115947ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:44.323122  522827 type.go:168] "Request Body" body=""
	I1217 20:29:44.323200  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:44.323555  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:44.437879  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 20:29:44.492501  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:29:44.499443  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:29:44.499482  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:44.499520  522827 retry.go:31] will retry after 1.265386144s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:44.551004  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:29:44.551045  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:44.551085  522827 retry.go:31] will retry after 774.139673ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:44.822655  522827 type.go:168] "Request Body" body=""
	I1217 20:29:44.822811  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:44.823195  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:45.323027  522827 type.go:168] "Request Body" body=""
	I1217 20:29:45.323135  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:45.323507  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:29:45.323621  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
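The interleaved GET requests come from node_ready.go, which polls https://192.168.49.2:8441/api/v1/nodes/functional-655452 roughly every 500ms (compare the timestamps) within the 6m0s budget declared above; with the apiserver down, every probe ends in "connection refused". The same endpoint can be probed by hand, assuming a shell on the host (illustrative only; -k skips TLS verification):

	curl -sk https://192.168.49.2:8441/api/v1/nodes/functional-655452
	# While the apiserver is down this fails along the same lines, e.g.:
	#   curl: (7) Failed to connect to 192.168.49.2 port 8441: Connection refused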
	I1217 20:29:45.325715  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:29:45.391952  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:29:45.395668  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:45.395750  522827 retry.go:31] will retry after 1.529541916s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:45.765134  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 20:29:45.822845  522827 type.go:168] "Request Body" body=""
	I1217 20:29:45.822973  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:45.823280  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:45.823537  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:29:45.827173  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:45.827206  522827 retry.go:31] will retry after 637.037829ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:46.322836  522827 type.go:168] "Request Body" body=""
	I1217 20:29:46.322927  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:46.323203  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:46.464492  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 20:29:46.525009  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:29:46.525062  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:46.525083  522827 retry.go:31] will retry after 1.110973738s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:46.822245  522827 type.go:168] "Request Body" body=""
	I1217 20:29:46.822323  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:46.822662  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:46.926099  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:29:46.987960  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:29:46.988006  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:46.988028  522827 retry.go:31] will retry after 1.385710629s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:47.322640  522827 type.go:168] "Request Body" body=""
	I1217 20:29:47.322715  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:47.323041  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:47.636709  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 20:29:47.697205  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:29:47.697243  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:47.697264  522827 retry.go:31] will retry after 4.090194732s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:47.822497  522827 type.go:168] "Request Body" body=""
	I1217 20:29:47.822589  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:47.822932  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:29:47.822989  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:29:48.322659  522827 type.go:168] "Request Body" body=""
	I1217 20:29:48.322736  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:48.323019  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:48.374352  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:29:48.431979  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:29:48.435409  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:48.435442  522827 retry.go:31] will retry after 3.099398493s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:48.823142  522827 type.go:168] "Request Body" body=""
	I1217 20:29:48.823220  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:48.823522  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:49.322226  522827 type.go:168] "Request Body" body=""
	I1217 20:29:49.322316  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:49.322645  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:49.822280  522827 type.go:168] "Request Body" body=""
	I1217 20:29:49.822356  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:49.822709  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:50.322247  522827 type.go:168] "Request Body" body=""
	I1217 20:29:50.322328  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:50.322668  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:29:50.322721  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:29:50.822373  522827 type.go:168] "Request Body" body=""
	I1217 20:29:50.822449  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:50.822719  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:51.322273  522827 type.go:168] "Request Body" body=""
	I1217 20:29:51.322361  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:51.322682  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:51.535119  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:29:51.608419  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:29:51.608461  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:51.608504  522827 retry.go:31] will retry after 5.948755722s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:51.787984  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 20:29:51.822458  522827 type.go:168] "Request Body" body=""
	I1217 20:29:51.822538  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:51.822817  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:51.846041  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:29:51.846085  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:51.846105  522827 retry.go:31] will retry after 5.856724643s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:52.322893  522827 type.go:168] "Request Body" body=""
	I1217 20:29:52.322982  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:52.323271  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:29:52.323320  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:29:52.822254  522827 type.go:168] "Request Body" body=""
	I1217 20:29:52.822329  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:52.822653  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:53.322391  522827 type.go:168] "Request Body" body=""
	I1217 20:29:53.322479  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:53.322825  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:53.822201  522827 type.go:168] "Request Body" body=""
	I1217 20:29:53.822273  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:53.822595  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:54.322265  522827 type.go:168] "Request Body" body=""
	I1217 20:29:54.322351  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:54.322683  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:54.822243  522827 type.go:168] "Request Body" body=""
	I1217 20:29:54.822322  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:54.822646  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:29:54.822705  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:29:55.322383  522827 type.go:168] "Request Body" body=""
	I1217 20:29:55.322466  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:55.322739  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:55.822262  522827 type.go:168] "Request Body" body=""
	I1217 20:29:55.822349  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:55.822680  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:56.322404  522827 type.go:168] "Request Body" body=""
	I1217 20:29:56.322493  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:56.322874  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:56.822564  522827 type.go:168] "Request Body" body=""
	I1217 20:29:56.822678  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:56.823046  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:29:56.823109  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:29:57.322771  522827 type.go:168] "Request Body" body=""
	I1217 20:29:57.322846  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:57.323141  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:57.557506  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:29:57.638482  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:29:57.642516  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:57.642548  522827 retry.go:31] will retry after 4.405911356s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:57.703796  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 20:29:57.764881  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:29:57.764928  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:57.764950  522827 retry.go:31] will retry after 7.580168113s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:57.823128  522827 type.go:168] "Request Body" body=""
	I1217 20:29:57.823235  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:57.823556  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:58.322216  522827 type.go:168] "Request Body" body=""
	I1217 20:29:58.322291  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:58.322579  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:58.822288  522827 type.go:168] "Request Body" body=""
	I1217 20:29:58.822434  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:58.822838  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:59.322555  522827 type.go:168] "Request Body" body=""
	I1217 20:29:59.322632  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:59.322948  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:29:59.323004  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:29:59.822770  522827 type.go:168] "Request Body" body=""
	I1217 20:29:59.822844  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:59.823119  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:00.323032  522827 type.go:168] "Request Body" body=""
	I1217 20:30:00.323116  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:00.323489  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:00.822257  522827 type.go:168] "Request Body" body=""
	I1217 20:30:00.822349  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:00.822678  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:01.322375  522827 type.go:168] "Request Body" body=""
	I1217 20:30:01.322459  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:01.322808  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:01.822288  522827 type.go:168] "Request Body" body=""
	I1217 20:30:01.822382  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:01.822690  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:30:01.822741  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:30:02.049201  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:30:02.136097  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:30:02.136138  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:30:02.136156  522827 retry.go:31] will retry after 5.567678678s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:30:02.322750  522827 type.go:168] "Request Body" body=""
	I1217 20:30:02.322843  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:02.323173  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:02.822939  522827 type.go:168] "Request Body" body=""
	I1217 20:30:02.823008  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:02.823350  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:03.323175  522827 type.go:168] "Request Body" body=""
	I1217 20:30:03.323258  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:03.323612  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:03.822172  522827 type.go:168] "Request Body" body=""
	I1217 20:30:03.822257  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:03.822603  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:04.322314  522827 type.go:168] "Request Body" body=""
	I1217 20:30:04.322401  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:04.322723  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:30:04.322781  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:30:04.822229  522827 type.go:168] "Request Body" body=""
	I1217 20:30:04.822306  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:04.822675  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:05.322398  522827 type.go:168] "Request Body" body=""
	I1217 20:30:05.322478  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:05.322839  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:05.346115  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 20:30:05.408232  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:30:05.408289  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:30:05.408313  522827 retry.go:31] will retry after 10.078206747s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:30:05.822854  522827 type.go:168] "Request Body" body=""
	I1217 20:30:05.822945  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:05.823317  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:06.323102  522827 type.go:168] "Request Body" body=""
	I1217 20:30:06.323172  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:06.323471  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:30:06.323519  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:30:06.822291  522827 type.go:168] "Request Body" body=""
	I1217 20:30:06.822371  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:06.822701  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:07.322790  522827 type.go:168] "Request Body" body=""
	I1217 20:30:07.322867  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:07.323162  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:07.703974  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:30:07.764647  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:30:07.764701  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:30:07.764721  522827 retry.go:31] will retry after 19.009086903s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:30:07.822843  522827 type.go:168] "Request Body" body=""
	I1217 20:30:07.822915  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:07.823267  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:08.323079  522827 type.go:168] "Request Body" body=""
	I1217 20:30:08.323151  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:08.323471  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:08.822188  522827 type.go:168] "Request Body" body=""
	I1217 20:30:08.822263  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:08.822521  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:30:08.822572  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:30:09.322241  522827 type.go:168] "Request Body" body=""
	I1217 20:30:09.322340  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:09.322671  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:09.822374  522827 type.go:168] "Request Body" body=""
	I1217 20:30:09.822457  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:09.822805  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:10.322483  522827 type.go:168] "Request Body" body=""
	I1217 20:30:10.322552  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:10.322843  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:10.822281  522827 type.go:168] "Request Body" body=""
	I1217 20:30:10.822358  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:10.822652  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:30:10.822700  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:30:11.322271  522827 type.go:168] "Request Body" body=""
	I1217 20:30:11.322352  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:11.322672  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:11.822207  522827 type.go:168] "Request Body" body=""
	I1217 20:30:11.822286  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:11.822549  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:12.322594  522827 type.go:168] "Request Body" body=""
	I1217 20:30:12.322674  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:12.322988  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:12.822976  522827 type.go:168] "Request Body" body=""
	I1217 20:30:12.823050  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:12.823410  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:30:12.823463  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:30:13.322144  522827 type.go:168] "Request Body" body=""
	I1217 20:30:13.322232  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:13.322521  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:13.822230  522827 type.go:168] "Request Body" body=""
	I1217 20:30:13.822307  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:13.822623  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:14.322243  522827 type.go:168] "Request Body" body=""
	I1217 20:30:14.322319  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:14.322666  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:14.822203  522827 type.go:168] "Request Body" body=""
	I1217 20:30:14.822311  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:14.822605  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:15.322246  522827 type.go:168] "Request Body" body=""
	I1217 20:30:15.322320  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:15.322647  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:30:15.322700  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:30:15.487149  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 20:30:15.557091  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:30:15.557136  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:30:15.557155  522827 retry.go:31] will retry after 12.964696684s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:30:15.822271  522827 type.go:168] "Request Body" body=""
	I1217 20:30:15.822351  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:15.822662  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:16.322350  522827 type.go:168] "Request Body" body=""
	I1217 20:30:16.322453  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:16.322760  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:16.822273  522827 type.go:168] "Request Body" body=""
	I1217 20:30:16.822358  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:16.822736  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:17.322687  522827 type.go:168] "Request Body" body=""
	I1217 20:30:17.322762  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:17.323107  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:30:17.323170  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:30:17.822929  522827 type.go:168] "Request Body" body=""
	I1217 20:30:17.823010  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:17.823369  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:18.322156  522827 type.go:168] "Request Body" body=""
	I1217 20:30:18.322228  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:18.322549  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:18.822291  522827 type.go:168] "Request Body" body=""
	I1217 20:30:18.822375  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:18.822749  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:19.322331  522827 type.go:168] "Request Body" body=""
	I1217 20:30:19.322407  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:19.322669  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:19.822262  522827 type.go:168] "Request Body" body=""
	I1217 20:30:19.822338  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:19.822665  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:30:19.822723  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:30:20.322409  522827 type.go:168] "Request Body" body=""
	I1217 20:30:20.322504  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:20.322816  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:20.822195  522827 type.go:168] "Request Body" body=""
	I1217 20:30:20.822272  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:20.822589  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:21.322282  522827 type.go:168] "Request Body" body=""
	I1217 20:30:21.322360  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:21.322714  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:21.822462  522827 type.go:168] "Request Body" body=""
	I1217 20:30:21.822537  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:21.822878  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:30:21.822935  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:30:22.322758  522827 type.go:168] "Request Body" body=""
	I1217 20:30:22.322831  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:22.323144  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:22.823099  522827 type.go:168] "Request Body" body=""
	I1217 20:30:22.823175  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:22.823543  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:23.322157  522827 type.go:168] "Request Body" body=""
	I1217 20:30:23.322244  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:23.322584  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:23.822276  522827 type.go:168] "Request Body" body=""
	I1217 20:30:23.822380  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:23.822675  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:24.322366  522827 type.go:168] "Request Body" body=""
	I1217 20:30:24.322440  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:24.322775  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:30:24.322830  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:30:24.822521  522827 type.go:168] "Request Body" body=""
	I1217 20:30:24.822606  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:24.822923  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:25.322198  522827 type.go:168] "Request Body" body=""
	I1217 20:30:25.322283  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:25.322621  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:25.822315  522827 type.go:168] "Request Body" body=""
	I1217 20:30:25.822391  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:25.822741  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:26.322318  522827 type.go:168] "Request Body" body=""
	I1217 20:30:26.322399  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:26.322716  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:26.774084  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:30:26.822641  522827 type.go:168] "Request Body" body=""
	I1217 20:30:26.822719  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:26.822976  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:30:26.823028  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:30:26.837910  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:30:26.841500  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:30:26.841530  522827 retry.go:31] will retry after 11.131595667s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:30:27.322446  522827 type.go:168] "Request Body" body=""
	I1217 20:30:27.322527  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:27.322849  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:27.822542  522827 type.go:168] "Request Body" body=""
	I1217 20:30:27.822619  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:27.822938  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:28.322188  522827 type.go:168] "Request Body" body=""
	I1217 20:30:28.322255  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:28.322560  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:28.523062  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 20:30:28.580613  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:30:28.584486  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:30:28.584522  522827 retry.go:31] will retry after 27.188888106s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:30:28.822927  522827 type.go:168] "Request Body" body=""
	I1217 20:30:28.823014  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:28.823356  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:30:28.823415  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:30:29.323074  522827 type.go:168] "Request Body" body=""
	I1217 20:30:29.323146  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:29.323504  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:29.822233  522827 type.go:168] "Request Body" body=""
	I1217 20:30:29.822317  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:29.822584  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:30.322255  522827 type.go:168] "Request Body" body=""
	I1217 20:30:30.322340  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:30.322682  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:30.822323  522827 type.go:168] "Request Body" body=""
	I1217 20:30:30.822402  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:30.822702  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:31.322380  522827 type.go:168] "Request Body" body=""
	I1217 20:30:31.322461  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:31.322751  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:30:31.322805  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:30:31.822248  522827 type.go:168] "Request Body" body=""
	I1217 20:30:31.822328  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:31.822687  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:32.322527  522827 type.go:168] "Request Body" body=""
	I1217 20:30:32.322604  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:32.322970  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:32.822789  522827 type.go:168] "Request Body" body=""
	I1217 20:30:32.822862  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:32.823113  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:33.322853  522827 type.go:168] "Request Body" body=""
	I1217 20:30:33.322933  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:33.323261  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:30:33.323318  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:30:33.823136  522827 type.go:168] "Request Body" body=""
	I1217 20:30:33.823234  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:33.823604  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:34.322280  522827 type.go:168] "Request Body" body=""
	I1217 20:30:34.322349  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:34.322657  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:34.822228  522827 type.go:168] "Request Body" body=""
	I1217 20:30:34.822303  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:34.822672  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:35.322420  522827 type.go:168] "Request Body" body=""
	I1217 20:30:35.322511  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:35.322908  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:35.822529  522827 type.go:168] "Request Body" body=""
	I1217 20:30:35.822596  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:35.822853  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:30:35.822892  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:30:36.322252  522827 type.go:168] "Request Body" body=""
	I1217 20:30:36.322343  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:36.322684  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:36.822246  522827 type.go:168] "Request Body" body=""
	I1217 20:30:36.822323  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:36.822689  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:37.322549  522827 type.go:168] "Request Body" body=""
	I1217 20:30:37.322619  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:37.322889  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:37.822237  522827 type.go:168] "Request Body" body=""
	I1217 20:30:37.822321  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:37.822668  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:37.974039  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:30:38.040817  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:30:38.040869  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:30:38.040889  522827 retry.go:31] will retry after 31.049103728s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:30:38.322172  522827 type.go:168] "Request Body" body=""
	I1217 20:30:38.322246  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:38.322560  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:30:38.322614  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:30:38.822324  522827 type.go:168] "Request Body" body=""
	I1217 20:30:38.822398  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:38.822724  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:39.322269  522827 type.go:168] "Request Body" body=""
	I1217 20:30:39.322343  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:39.322660  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:39.822351  522827 type.go:168] "Request Body" body=""
	I1217 20:30:39.822429  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:39.822784  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:40.322493  522827 type.go:168] "Request Body" body=""
	I1217 20:30:40.322565  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:40.322832  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:30:40.322881  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:30:40.822257  522827 type.go:168] "Request Body" body=""
	I1217 20:30:40.822336  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:40.822683  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:41.322402  522827 type.go:168] "Request Body" body=""
	I1217 20:30:41.322476  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:41.322798  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:41.822322  522827 type.go:168] "Request Body" body=""
	I1217 20:30:41.822410  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:41.822673  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:42.322668  522827 type.go:168] "Request Body" body=""
	I1217 20:30:42.322753  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:42.323078  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:30:42.323134  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:30:42.822880  522827 type.go:168] "Request Body" body=""
	I1217 20:30:42.822964  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:42.823451  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:43.322210  522827 type.go:168] "Request Body" body=""
	I1217 20:30:43.322284  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:43.322583  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:43.822235  522827 type.go:168] "Request Body" body=""
	I1217 20:30:43.822309  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:43.822654  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:44.322386  522827 type.go:168] "Request Body" body=""
	I1217 20:30:44.322461  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:44.322790  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:44.822318  522827 type.go:168] "Request Body" body=""
	I1217 20:30:44.822384  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:44.822682  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:30:44.822724  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:30:45.322416  522827 type.go:168] "Request Body" body=""
	I1217 20:30:45.322496  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:45.322829  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:45.822255  522827 type.go:168] "Request Body" body=""
	I1217 20:30:45.822332  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:45.822669  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:46.322325  522827 type.go:168] "Request Body" body=""
	I1217 20:30:46.322400  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:46.322665  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:46.822380  522827 type.go:168] "Request Body" body=""
	I1217 20:30:46.822473  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:46.822817  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:30:46.822872  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
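Every one of these warnings is the same symptom: nothing is listening on 192.168.49.2:8441. A quick way to watch for the apiserver to come back, independent of kubectl, is to probe its /readyz endpoint. This is a sketch under two assumptions: anonymous access to /readyz is allowed (the default, via the system:public-info-viewer role), and certificate verification is skipped because this is a local health probe only:

	// readyz_probe.go - hypothetical probe of the apiserver's /readyz endpoint.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 2 * time.Second,
			Transport: &http.Transport{
				// Local health probe only: do not verify the apiserver's certificate.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		for {
			resp, err := client.Get("https://192.168.49.2:8441/readyz")
			if err == nil && resp.StatusCode == http.StatusOK {
				resp.Body.Close()
				fmt.Println("apiserver is ready")
				return
			}
			if err != nil {
				fmt.Println("not ready:", err) // e.g. connection refused, as in the log
			} else {
				resp.Body.Close()
				fmt.Println("not ready: status", resp.StatusCode)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}

Once /readyz returns 200, both the node poll and the addon applies above would start succeeding.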
	I1217 20:30:47.322661  522827 type.go:168] "Request Body" body=""
	I1217 20:30:47.322735  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:47.323065  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:47.822781  522827 type.go:168] "Request Body" body=""
	I1217 20:30:47.822857  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:47.823119  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:48.322897  522827 type.go:168] "Request Body" body=""
	I1217 20:30:48.322974  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:48.323345  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:48.823144  522827 type.go:168] "Request Body" body=""
	I1217 20:30:48.823234  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:48.823560  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:30:48.823640  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:30:49.322261  522827 type.go:168] "Request Body" body=""
	I1217 20:30:49.322332  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:49.322595  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:49.822322  522827 type.go:168] "Request Body" body=""
	I1217 20:30:49.822426  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:49.822794  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:50.322509  522827 type.go:168] "Request Body" body=""
	I1217 20:30:50.322590  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:50.322932  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:50.822546  522827 type.go:168] "Request Body" body=""
	I1217 20:30:50.822615  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:50.822946  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:51.322643  522827 type.go:168] "Request Body" body=""
	I1217 20:30:51.322718  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:51.323024  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:30:51.323070  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:30:51.822296  522827 type.go:168] "Request Body" body=""
	I1217 20:30:51.822374  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:51.822687  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... this GET poll of /api/v1/nodes/functional-655452 repeats every ~500ms through 20:30:55.3, every attempt refused; node_ready.go:55 re-logs the "connection refused" warning at ~2s intervals (e.g. 20:30:53.8) ...]
	I1217 20:30:55.774295  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 20:30:55.822774  522827 type.go:168] "Request Body" body=""
	I1217 20:30:55.822854  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:55.823178  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:30:55.823237  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:30:55.835665  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:30:55.835703  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:30:55.835722  522827 retry.go:31] will retry after 28.301795669s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
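The retry.go:31 line above shows minikube backing off about 28 seconds before re-running the failed kubectl apply. The actual backoff policy is not visible in this log; a generic capped, jittered exponential-backoff retry of the same shape could be sketched as below (all names and constants here are illustrative assumptions, not minikube's retry.go):

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff retries op until it succeeds or attempts run out,
// sleeping an exponentially growing, jittered interval between tries,
// the same shape as the "will retry after 28.3s" line in this log.
func retryWithBackoff(attempts int, base time.Duration, op func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		// Exponential growth with jitter in [d/2, 3d/2), so concurrent
		// retriers don't all hit the apiserver at the same instant.
		d := base << uint(i)
		d = d/2 + time.Duration(rand.Int63n(int64(d)))
		fmt.Printf("will retry after %s: %v\n", d, err)
		time.Sleep(d)
	}
	return fmt.Errorf("after %d attempts: %w", attempts, err)
}

func main() {
	err := retryWithBackoff(5, time.Second, func() error {
		// Stand-in for the failing kubectl apply from the log.
		return errors.New("connect: connection refused")
	})
	fmt.Println(err)
}
```

Because every apply here fails for the same reason (the apiserver never comes back on 8441), the retries below exhaust without ever succeeding.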
	I1217 20:30:56.322365  522827 type.go:168] "Request Body" body=""
	I1217 20:30:56.322444  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:56.322778  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... same GET poll repeats every ~500ms through 20:31:08.8, all attempts refused; node_ready.go:55 warnings recur at ~2s intervals (20:30:57.8, 20:31:00.3, 20:31:02.8, 20:31:04.8, 20:31:06.8) ...]
	I1217 20:31:09.091155  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:31:09.152330  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:31:09.155944  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:31:09.156044  522827 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1217 20:31:09.322225  522827 type.go:168] "Request Body" body=""
	I1217 20:31:09.322322  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:09.322666  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:31:09.322722  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	[... same GET poll repeats every ~500ms through 20:31:23.8, all attempts refused; the node_ready.go:55 warning recurs at ~2s intervals ...]
	I1217 20:31:24.138134  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 20:31:24.201991  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:31:24.202036  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:31:24.202117  522827 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1217 20:31:24.205262  522827 out.go:179] * Enabled addons: 
	I1217 20:31:24.208903  522827 addons.go:530] duration metric: took 1m41.560475312s for enable addons: enabled=[]
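Every addon apply in this run follows the same shape: ssh_runner executes `sudo KUBECONFIG=... kubectl apply --force -f <manifest>` inside the node and records stdout/stderr, and the addons phase finishes with an empty enabled list once the retries are spent. A local stand-in for that invocation is sketched below; the real code runs the command over SSH (ssh_runner.go), the paths are copied from the log, and the helper name is hypothetical:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyManifest mirrors the logged invocation shape: run kubectl apply
// against a manifest with an explicit kubeconfig, and surface combined
// output so failures like "connection refused" reach the caller.
func applyManifest(kubeconfig, manifest string) error {
	cmd := exec.Command("kubectl", "apply", "--force", "-f", manifest)
	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("apply %s failed: %w\n%s", manifest, err, out)
	}
	return nil
}

func main() {
	// Paths match the logged command; this only reproduces its shape.
	if err := applyManifest("/var/lib/minikube/kubeconfig",
		"/etc/kubernetes/addons/storageclass.yaml"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```

Note that kubectl's client-side validation is what fails first here: it cannot download the OpenAPI schema from the dead apiserver, which is why the stderr suggests `--validate=false` even though the apply would fail regardless.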
	I1217 20:31:24.322240  522827 type.go:168] "Request Body" body=""
	I1217 20:31:24.322318  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:24.322668  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... same GET poll repeats every ~500ms through 20:31:47.8, all attempts refused; node_ready.go:55 warnings recur at ~2s intervals, the last in this span at 20:31:47.3 ...]
	I1217 20:31:48.323164  522827 type.go:168] "Request Body" body=""
	I1217 20:31:48.323244  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:48.323562  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:48.822251  522827 type.go:168] "Request Body" body=""
	I1217 20:31:48.822329  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:48.822696  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:49.322381  522827 type.go:168] "Request Body" body=""
	I1217 20:31:49.322457  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:49.322785  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:49.822503  522827 type.go:168] "Request Body" body=""
	I1217 20:31:49.822582  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:49.822896  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:31:49.822946  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:31:50.322276  522827 type.go:168] "Request Body" body=""
	I1217 20:31:50.322366  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:50.322737  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:50.822198  522827 type.go:168] "Request Body" body=""
	I1217 20:31:50.822270  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:50.822542  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:51.322250  522827 type.go:168] "Request Body" body=""
	I1217 20:31:51.322336  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:51.322688  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:51.822279  522827 type.go:168] "Request Body" body=""
	I1217 20:31:51.822356  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:51.822647  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:52.322172  522827 type.go:168] "Request Body" body=""
	I1217 20:31:52.322269  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:52.322529  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:31:52.322584  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:31:52.822307  522827 type.go:168] "Request Body" body=""
	I1217 20:31:52.822381  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:52.822703  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:53.322352  522827 type.go:168] "Request Body" body=""
	I1217 20:31:53.322443  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:53.322765  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:53.822450  522827 type.go:168] "Request Body" body=""
	I1217 20:31:53.822519  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:53.822836  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:54.322259  522827 type.go:168] "Request Body" body=""
	I1217 20:31:54.322342  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:54.322677  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:31:54.322737  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:31:54.822413  522827 type.go:168] "Request Body" body=""
	I1217 20:31:54.822500  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:54.822844  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:55.322509  522827 type.go:168] "Request Body" body=""
	I1217 20:31:55.322590  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:55.322859  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:55.822209  522827 type.go:168] "Request Body" body=""
	I1217 20:31:55.822286  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:55.822615  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:56.322334  522827 type.go:168] "Request Body" body=""
	I1217 20:31:56.322412  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:56.322700  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:56.822188  522827 type.go:168] "Request Body" body=""
	I1217 20:31:56.822256  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:56.822570  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:31:56.822617  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:31:57.322493  522827 type.go:168] "Request Body" body=""
	I1217 20:31:57.322571  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:57.322891  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:57.822474  522827 type.go:168] "Request Body" body=""
	I1217 20:31:57.822550  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:57.822881  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:58.322311  522827 type.go:168] "Request Body" body=""
	I1217 20:31:58.322386  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:58.322639  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:58.822249  522827 type.go:168] "Request Body" body=""
	I1217 20:31:58.822326  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:58.822659  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:31:58.822714  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:31:59.322242  522827 type.go:168] "Request Body" body=""
	I1217 20:31:59.322321  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:59.322689  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:59.822316  522827 type.go:168] "Request Body" body=""
	I1217 20:31:59.822386  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:59.822717  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:00.322382  522827 type.go:168] "Request Body" body=""
	I1217 20:32:00.322473  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:00.322812  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:00.822290  522827 type.go:168] "Request Body" body=""
	I1217 20:32:00.822374  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:00.822752  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:32:00.822812  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:32:01.322354  522827 type.go:168] "Request Body" body=""
	I1217 20:32:01.322434  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:01.322743  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:01.822229  522827 type.go:168] "Request Body" body=""
	I1217 20:32:01.822312  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:01.822658  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:02.322687  522827 type.go:168] "Request Body" body=""
	I1217 20:32:02.322779  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:02.323110  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:02.823078  522827 type.go:168] "Request Body" body=""
	I1217 20:32:02.823185  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:02.823454  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:32:02.823500  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:32:03.322198  522827 type.go:168] "Request Body" body=""
	I1217 20:32:03.322280  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:03.322619  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:03.822356  522827 type.go:168] "Request Body" body=""
	I1217 20:32:03.822434  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:03.822736  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:04.322315  522827 type.go:168] "Request Body" body=""
	I1217 20:32:04.322389  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:04.322658  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:04.822260  522827 type.go:168] "Request Body" body=""
	I1217 20:32:04.822366  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:04.822762  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:05.322471  522827 type.go:168] "Request Body" body=""
	I1217 20:32:05.322560  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:05.322916  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:32:05.322977  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:32:05.822615  522827 type.go:168] "Request Body" body=""
	I1217 20:32:05.822691  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:05.823031  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:06.322818  522827 type.go:168] "Request Body" body=""
	I1217 20:32:06.322895  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:06.323223  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:06.822995  522827 type.go:168] "Request Body" body=""
	I1217 20:32:06.823069  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:06.823419  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:07.322171  522827 type.go:168] "Request Body" body=""
	I1217 20:32:07.322242  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:07.322555  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:07.822236  522827 type.go:168] "Request Body" body=""
	I1217 20:32:07.822316  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:07.822639  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:32:07.822694  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:32:08.322234  522827 type.go:168] "Request Body" body=""
	I1217 20:32:08.322313  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:08.322610  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:08.822290  522827 type.go:168] "Request Body" body=""
	I1217 20:32:08.822368  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:08.822630  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:09.322201  522827 type.go:168] "Request Body" body=""
	I1217 20:32:09.322283  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:09.322629  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:09.822331  522827 type.go:168] "Request Body" body=""
	I1217 20:32:09.822412  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:09.822739  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:32:09.822812  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:32:10.322224  522827 type.go:168] "Request Body" body=""
	I1217 20:32:10.322297  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:10.322657  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:10.822387  522827 type.go:168] "Request Body" body=""
	I1217 20:32:10.822470  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:10.822875  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:11.322246  522827 type.go:168] "Request Body" body=""
	I1217 20:32:11.322339  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:11.322696  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:11.822377  522827 type.go:168] "Request Body" body=""
	I1217 20:32:11.822447  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:11.822730  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:12.322684  522827 type.go:168] "Request Body" body=""
	I1217 20:32:12.322757  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:12.323075  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:32:12.323135  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:32:12.823123  522827 type.go:168] "Request Body" body=""
	I1217 20:32:12.823215  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:12.823567  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:13.322271  522827 type.go:168] "Request Body" body=""
	I1217 20:32:13.322361  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:13.322653  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:13.822251  522827 type.go:168] "Request Body" body=""
	I1217 20:32:13.822330  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:13.822693  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:14.322242  522827 type.go:168] "Request Body" body=""
	I1217 20:32:14.322324  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:14.322673  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:14.822355  522827 type.go:168] "Request Body" body=""
	I1217 20:32:14.822428  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:14.822689  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:32:14.822736  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:32:15.322245  522827 type.go:168] "Request Body" body=""
	I1217 20:32:15.322322  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:15.322646  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:15.822224  522827 type.go:168] "Request Body" body=""
	I1217 20:32:15.822301  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:15.822625  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:16.322176  522827 type.go:168] "Request Body" body=""
	I1217 20:32:16.322257  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:16.322573  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:16.822265  522827 type.go:168] "Request Body" body=""
	I1217 20:32:16.822341  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:16.822656  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:17.322600  522827 type.go:168] "Request Body" body=""
	I1217 20:32:17.322693  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:17.323051  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:32:17.323108  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:32:17.822821  522827 type.go:168] "Request Body" body=""
	I1217 20:32:17.822890  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:17.823195  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:18.322987  522827 type.go:168] "Request Body" body=""
	I1217 20:32:18.323062  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:18.323387  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:18.823193  522827 type.go:168] "Request Body" body=""
	I1217 20:32:18.823271  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:18.823632  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:19.322231  522827 type.go:168] "Request Body" body=""
	I1217 20:32:19.322300  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:19.322563  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:19.822248  522827 type.go:168] "Request Body" body=""
	I1217 20:32:19.822332  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:19.822689  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:32:19.822743  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:32:20.322270  522827 type.go:168] "Request Body" body=""
	I1217 20:32:20.322351  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:20.322706  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:20.822403  522827 type.go:168] "Request Body" body=""
	I1217 20:32:20.822483  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:20.822759  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:21.322436  522827 type.go:168] "Request Body" body=""
	I1217 20:32:21.322518  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:21.322864  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:21.822578  522827 type.go:168] "Request Body" body=""
	I1217 20:32:21.822655  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:21.823020  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:32:21.823078  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:32:22.322774  522827 type.go:168] "Request Body" body=""
	I1217 20:32:22.322847  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:22.323116  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:22.823126  522827 type.go:168] "Request Body" body=""
	I1217 20:32:22.823213  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:22.823625  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:23.322331  522827 type.go:168] "Request Body" body=""
	I1217 20:32:23.322407  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:23.322751  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:23.822449  522827 type.go:168] "Request Body" body=""
	I1217 20:32:23.822519  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:23.822856  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:24.322228  522827 type.go:168] "Request Body" body=""
	I1217 20:32:24.322310  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:24.322653  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:32:24.322710  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:32:24.822249  522827 type.go:168] "Request Body" body=""
	I1217 20:32:24.822326  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:24.822711  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:25.322197  522827 type.go:168] "Request Body" body=""
	I1217 20:32:25.322269  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:25.322562  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:25.822261  522827 type.go:168] "Request Body" body=""
	I1217 20:32:25.822338  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:25.822615  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:26.322271  522827 type.go:168] "Request Body" body=""
	I1217 20:32:26.322347  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:26.322669  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:26.822294  522827 type.go:168] "Request Body" body=""
	I1217 20:32:26.822375  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:26.822659  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:32:26.822711  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:32:27.322690  522827 type.go:168] "Request Body" body=""
	I1217 20:32:27.322770  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:27.323105  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:27.822647  522827 type.go:168] "Request Body" body=""
	I1217 20:32:27.822726  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:27.823033  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:28.322766  522827 type.go:168] "Request Body" body=""
	I1217 20:32:28.322834  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:28.323196  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:28.822977  522827 type.go:168] "Request Body" body=""
	I1217 20:32:28.823055  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:28.823384  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:32:28.823437  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:32:29.322124  522827 type.go:168] "Request Body" body=""
	I1217 20:32:29.322205  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:29.322530  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:29.822227  522827 type.go:168] "Request Body" body=""
	I1217 20:32:29.822306  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:29.822567  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:30.322239  522827 type.go:168] "Request Body" body=""
	I1217 20:32:30.322320  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:30.322615  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:30.822251  522827 type.go:168] "Request Body" body=""
	I1217 20:32:30.822335  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:30.822684  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:31.322230  522827 type.go:168] "Request Body" body=""
	I1217 20:32:31.322297  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:31.322575  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:32:31.322631  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:32:31.822242  522827 type.go:168] "Request Body" body=""
	I1217 20:32:31.822318  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:31.822645  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:32.322646  522827 type.go:168] "Request Body" body=""
	I1217 20:32:32.322717  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:32.323066  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:32.822921  522827 type.go:168] "Request Body" body=""
	I1217 20:32:32.822993  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:32.823283  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:33.323063  522827 type.go:168] "Request Body" body=""
	I1217 20:32:33.323158  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:33.323500  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:32:33.323569  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:32:33.822260  522827 type.go:168] "Request Body" body=""
	I1217 20:32:33.822354  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:33.822685  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:34.322405  522827 type.go:168] "Request Body" body=""
	I1217 20:32:34.322478  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:34.322748  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:34.822278  522827 type.go:168] "Request Body" body=""
	I1217 20:32:34.822374  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:34.822748  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:35.322476  522827 type.go:168] "Request Body" body=""
	I1217 20:32:35.322570  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:35.322893  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:35.822173  522827 type.go:168] "Request Body" body=""
	I1217 20:32:35.822243  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:35.822502  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:32:35.822542  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:32:36.322264  522827 type.go:168] "Request Body" body=""
	I1217 20:32:36.322345  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:36.322701  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:36.822410  522827 type.go:168] "Request Body" body=""
	I1217 20:32:36.822488  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:36.822823  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:37.322668  522827 type.go:168] "Request Body" body=""
	I1217 20:32:37.322737  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:37.322989  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:37.822848  522827 type.go:168] "Request Body" body=""
	I1217 20:32:37.822924  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:37.823275  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:32:37.823343  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	[... ~120 near-identical poll attempts omitted: the same GET "Request"/"Response" pair against https://192.168.49.2:8441/api/v1/nodes/functional-655452 repeats every ~500ms from 20:32:38.323 through 20:33:38.822, and node_ready.go:55 logs the same "connection refused" warning roughly every 2 seconds throughout ...]
	I1217 20:33:39.322260  522827 type.go:168] "Request Body" body=""
	I1217 20:33:39.322339  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:39.322710  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:39.822274  522827 type.go:168] "Request Body" body=""
	I1217 20:33:39.822350  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:39.822691  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:33:39.822743  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:33:40.322386  522827 type.go:168] "Request Body" body=""
	I1217 20:33:40.322461  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:40.322777  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:40.822263  522827 type.go:168] "Request Body" body=""
	I1217 20:33:40.822339  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:40.822619  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:41.322249  522827 type.go:168] "Request Body" body=""
	I1217 20:33:41.322340  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:41.322697  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:41.822374  522827 type.go:168] "Request Body" body=""
	I1217 20:33:41.822453  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:41.822786  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:33:41.822845  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:33:42.322518  522827 type.go:168] "Request Body" body=""
	I1217 20:33:42.322620  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:42.323128  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:42.823194  522827 type.go:168] "Request Body" body=""
	I1217 20:33:42.823280  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:42.823645  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:43.322176  522827 type.go:168] "Request Body" body=""
	I1217 20:33:43.322242  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:43.322490  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:43.822197  522827 type.go:168] "Request Body" body=""
	I1217 20:33:43.822292  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:43.822663  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:44.322229  522827 type.go:168] "Request Body" body=""
	I1217 20:33:44.322308  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:44.322678  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:33:44.322735  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:33:44.822242  522827 type.go:168] "Request Body" body=""
	I1217 20:33:44.822312  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:44.822572  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:45.322246  522827 type.go:168] "Request Body" body=""
	I1217 20:33:45.322321  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:45.322658  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:45.822380  522827 type.go:168] "Request Body" body=""
	I1217 20:33:45.822458  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:45.822809  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:46.322495  522827 type.go:168] "Request Body" body=""
	I1217 20:33:46.322574  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:46.322896  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:33:46.322955  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:33:46.822620  522827 type.go:168] "Request Body" body=""
	I1217 20:33:46.822697  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:46.823021  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:47.322811  522827 type.go:168] "Request Body" body=""
	I1217 20:33:47.322892  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:47.323256  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:47.823109  522827 type.go:168] "Request Body" body=""
	I1217 20:33:47.823190  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:47.823487  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:48.322186  522827 type.go:168] "Request Body" body=""
	I1217 20:33:48.322263  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:48.322612  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:48.822323  522827 type.go:168] "Request Body" body=""
	I1217 20:33:48.822399  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:48.822726  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:33:48.822794  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:33:49.322196  522827 type.go:168] "Request Body" body=""
	I1217 20:33:49.322273  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:49.322588  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:49.822263  522827 type.go:168] "Request Body" body=""
	I1217 20:33:49.822348  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:49.822724  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:50.322473  522827 type.go:168] "Request Body" body=""
	I1217 20:33:50.322557  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:50.322925  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:50.822209  522827 type.go:168] "Request Body" body=""
	I1217 20:33:50.822284  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:50.822560  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:51.322238  522827 type.go:168] "Request Body" body=""
	I1217 20:33:51.322322  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:51.322661  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:33:51.322714  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:33:51.822385  522827 type.go:168] "Request Body" body=""
	I1217 20:33:51.822483  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:51.822831  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:52.322696  522827 type.go:168] "Request Body" body=""
	I1217 20:33:52.322769  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:52.323046  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:52.823035  522827 type.go:168] "Request Body" body=""
	I1217 20:33:52.823114  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:52.823430  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:53.322170  522827 type.go:168] "Request Body" body=""
	I1217 20:33:53.322245  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:53.322568  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:53.822148  522827 type.go:168] "Request Body" body=""
	I1217 20:33:53.822225  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:53.822487  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:33:53.822527  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:33:54.322252  522827 type.go:168] "Request Body" body=""
	I1217 20:33:54.322346  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:54.322676  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:54.822391  522827 type.go:168] "Request Body" body=""
	I1217 20:33:54.822487  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:54.822807  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:55.322470  522827 type.go:168] "Request Body" body=""
	I1217 20:33:55.322551  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:55.322876  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:55.822280  522827 type.go:168] "Request Body" body=""
	I1217 20:33:55.822364  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:55.822753  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:33:55.822813  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:33:56.322272  522827 type.go:168] "Request Body" body=""
	I1217 20:33:56.322351  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:56.322670  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:56.822314  522827 type.go:168] "Request Body" body=""
	I1217 20:33:56.822391  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:56.822658  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:57.322710  522827 type.go:168] "Request Body" body=""
	I1217 20:33:57.322780  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:57.323117  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:57.822916  522827 type.go:168] "Request Body" body=""
	I1217 20:33:57.823001  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:57.823366  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:33:57.823421  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:33:58.323148  522827 type.go:168] "Request Body" body=""
	I1217 20:33:58.323218  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:58.323513  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:58.822212  522827 type.go:168] "Request Body" body=""
	I1217 20:33:58.822296  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:58.822656  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:59.322223  522827 type.go:168] "Request Body" body=""
	I1217 20:33:59.322305  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:59.322651  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:59.822221  522827 type.go:168] "Request Body" body=""
	I1217 20:33:59.822297  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:59.822641  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:00.322298  522827 type.go:168] "Request Body" body=""
	I1217 20:34:00.322392  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:00.322726  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:34:00.322782  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:34:00.822577  522827 type.go:168] "Request Body" body=""
	I1217 20:34:00.822662  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:00.823038  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:01.322657  522827 type.go:168] "Request Body" body=""
	I1217 20:34:01.322731  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:01.323025  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:01.822880  522827 type.go:168] "Request Body" body=""
	I1217 20:34:01.822955  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:01.823320  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:02.323040  522827 type.go:168] "Request Body" body=""
	I1217 20:34:02.323124  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:02.323461  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:34:02.323514  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:34:02.822183  522827 type.go:168] "Request Body" body=""
	I1217 20:34:02.822254  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:02.822522  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:03.322245  522827 type.go:168] "Request Body" body=""
	I1217 20:34:03.322323  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:03.322656  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:03.822270  522827 type.go:168] "Request Body" body=""
	I1217 20:34:03.822349  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:03.822703  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:04.322271  522827 type.go:168] "Request Body" body=""
	I1217 20:34:04.322351  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:04.322622  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:04.822245  522827 type.go:168] "Request Body" body=""
	I1217 20:34:04.822344  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:04.822655  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:34:04.822707  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:34:05.322405  522827 type.go:168] "Request Body" body=""
	I1217 20:34:05.322482  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:05.322821  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:05.822296  522827 type.go:168] "Request Body" body=""
	I1217 20:34:05.822365  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:05.822641  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:06.322257  522827 type.go:168] "Request Body" body=""
	I1217 20:34:06.322357  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:06.322688  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:06.822277  522827 type.go:168] "Request Body" body=""
	I1217 20:34:06.822353  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:06.822641  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:07.322615  522827 type.go:168] "Request Body" body=""
	I1217 20:34:07.322701  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:07.322998  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:34:07.323048  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:34:07.822861  522827 type.go:168] "Request Body" body=""
	I1217 20:34:07.822938  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:07.823293  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:08.323117  522827 type.go:168] "Request Body" body=""
	I1217 20:34:08.323193  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:08.323537  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:08.822232  522827 type.go:168] "Request Body" body=""
	I1217 20:34:08.822303  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:08.822638  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:09.322217  522827 type.go:168] "Request Body" body=""
	I1217 20:34:09.322290  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:09.322637  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:09.822237  522827 type.go:168] "Request Body" body=""
	I1217 20:34:09.822317  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:09.822642  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:34:09.822697  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:34:10.322196  522827 type.go:168] "Request Body" body=""
	I1217 20:34:10.322271  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:10.322575  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:10.822218  522827 type.go:168] "Request Body" body=""
	I1217 20:34:10.822302  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:10.822644  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:11.322351  522827 type.go:168] "Request Body" body=""
	I1217 20:34:11.322431  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:11.322804  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:11.822287  522827 type.go:168] "Request Body" body=""
	I1217 20:34:11.822357  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:11.822618  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:12.322611  522827 type.go:168] "Request Body" body=""
	I1217 20:34:12.322687  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:12.323025  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:34:12.323091  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:34:12.822902  522827 type.go:168] "Request Body" body=""
	I1217 20:34:12.822982  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:12.823336  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:13.323079  522827 type.go:168] "Request Body" body=""
	I1217 20:34:13.323153  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:13.323408  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:13.822161  522827 type.go:168] "Request Body" body=""
	I1217 20:34:13.822240  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:13.822575  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:14.322231  522827 type.go:168] "Request Body" body=""
	I1217 20:34:14.322308  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:14.322650  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:14.822224  522827 type.go:168] "Request Body" body=""
	I1217 20:34:14.822298  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:14.822572  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:34:14.822622  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:34:15.322292  522827 type.go:168] "Request Body" body=""
	I1217 20:34:15.322381  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:15.322726  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:15.822430  522827 type.go:168] "Request Body" body=""
	I1217 20:34:15.822518  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:15.822853  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:16.322471  522827 type.go:168] "Request Body" body=""
	I1217 20:34:16.322546  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:16.322836  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:16.822523  522827 type.go:168] "Request Body" body=""
	I1217 20:34:16.822605  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:16.822901  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:34:16.822951  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:34:17.322790  522827 type.go:168] "Request Body" body=""
	I1217 20:34:17.322869  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:17.323207  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:17.822955  522827 type.go:168] "Request Body" body=""
	I1217 20:34:17.823029  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:17.823314  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:18.323135  522827 type.go:168] "Request Body" body=""
	I1217 20:34:18.323209  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:18.323507  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:18.822255  522827 type.go:168] "Request Body" body=""
	I1217 20:34:18.822334  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:18.822699  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:19.322387  522827 type.go:168] "Request Body" body=""
	I1217 20:34:19.322457  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:19.322762  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:34:19.322824  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:34:19.822236  522827 type.go:168] "Request Body" body=""
	I1217 20:34:19.822309  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:19.822629  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:20.322246  522827 type.go:168] "Request Body" body=""
	I1217 20:34:20.322329  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:20.322668  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:20.822198  522827 type.go:168] "Request Body" body=""
	I1217 20:34:20.822275  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:20.822590  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:21.322284  522827 type.go:168] "Request Body" body=""
	I1217 20:34:21.322362  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:21.322710  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:21.822303  522827 type.go:168] "Request Body" body=""
	I1217 20:34:21.822382  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:21.822717  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:34:21.822772  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:34:22.322546  522827 type.go:168] "Request Body" body=""
	I1217 20:34:22.322615  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:22.322869  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:22.822850  522827 type.go:168] "Request Body" body=""
	I1217 20:34:22.822926  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:22.823275  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:23.323068  522827 type.go:168] "Request Body" body=""
	I1217 20:34:23.323142  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:23.323472  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:23.822173  522827 type.go:168] "Request Body" body=""
	I1217 20:34:23.822252  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:23.822565  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:24.322250  522827 type.go:168] "Request Body" body=""
	I1217 20:34:24.322333  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:24.322684  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:34:24.322736  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:34:24.822303  522827 type.go:168] "Request Body" body=""
	I1217 20:34:24.822394  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:24.822738  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:25.322430  522827 type.go:168] "Request Body" body=""
	I1217 20:34:25.322506  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:25.322760  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:25.822245  522827 type.go:168] "Request Body" body=""
	I1217 20:34:25.822324  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:25.822671  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:26.322262  522827 type.go:168] "Request Body" body=""
	I1217 20:34:26.322344  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:26.322685  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:26.822350  522827 type.go:168] "Request Body" body=""
	I1217 20:34:26.822425  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:26.822723  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:34:26.822775  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:34:27.322731  522827 type.go:168] "Request Body" body=""
	I1217 20:34:27.322805  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:27.323135  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:27.822789  522827 type.go:168] "Request Body" body=""
	I1217 20:34:27.822869  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:27.823223  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:28.323014  522827 type.go:168] "Request Body" body=""
	I1217 20:34:28.323092  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:28.323358  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:28.823134  522827 type.go:168] "Request Body" body=""
	I1217 20:34:28.823222  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:28.823569  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:34:28.823650  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	[... identical polling cycles elided: the GET to https://192.168.49.2:8441/api/v1/nodes/functional-655452 (Accept: application/vnd.kubernetes.protobuf,application/json) repeated every ~500ms from 20:34:29.322 through 20:35:30.322, every attempt failing with "dial tcp 192.168.49.2:8441: connect: connection refused", and node_ready.go:55 logged the will-retry warning for condition "Ready" roughly every 2–2.5s throughout ...]
	I1217 20:35:30.822264  522827 type.go:168] "Request Body" body=""
	I1217 20:35:30.822339  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:30.822658  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:35:30.822715  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:35:31.322385  522827 type.go:168] "Request Body" body=""
	I1217 20:35:31.322460  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:31.322798  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:31.822531  522827 type.go:168] "Request Body" body=""
	I1217 20:35:31.822610  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:31.822946  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:32.322713  522827 type.go:168] "Request Body" body=""
	I1217 20:35:32.322793  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:32.323145  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:32.822950  522827 type.go:168] "Request Body" body=""
	I1217 20:35:32.823025  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:32.823278  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:35:32.823318  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:35:33.323110  522827 type.go:168] "Request Body" body=""
	I1217 20:35:33.323192  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:33.323540  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:33.822246  522827 type.go:168] "Request Body" body=""
	I1217 20:35:33.822323  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:33.822658  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:34.322218  522827 type.go:168] "Request Body" body=""
	I1217 20:35:34.322300  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:34.322661  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:34.822268  522827 type.go:168] "Request Body" body=""
	I1217 20:35:34.822347  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:34.822680  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:35.322230  522827 type.go:168] "Request Body" body=""
	I1217 20:35:35.322307  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:35.322640  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:35:35.322702  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:35:35.822209  522827 type.go:168] "Request Body" body=""
	I1217 20:35:35.822278  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:35.822595  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:36.322264  522827 type.go:168] "Request Body" body=""
	I1217 20:35:36.322343  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:36.322666  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:36.822232  522827 type.go:168] "Request Body" body=""
	I1217 20:35:36.822306  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:36.822658  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:37.322496  522827 type.go:168] "Request Body" body=""
	I1217 20:35:37.322571  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:37.322824  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:35:37.322862  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:35:37.822509  522827 type.go:168] "Request Body" body=""
	I1217 20:35:37.822586  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:37.822928  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:38.322513  522827 type.go:168] "Request Body" body=""
	I1217 20:35:38.322595  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:38.323137  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:38.822886  522827 type.go:168] "Request Body" body=""
	I1217 20:35:38.822959  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:38.823295  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:39.323106  522827 type.go:168] "Request Body" body=""
	I1217 20:35:39.323188  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:39.323560  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:35:39.323633  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:35:39.822201  522827 type.go:168] "Request Body" body=""
	I1217 20:35:39.822276  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:39.822619  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:40.322173  522827 type.go:168] "Request Body" body=""
	I1217 20:35:40.322246  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:40.322545  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:40.822242  522827 type.go:168] "Request Body" body=""
	I1217 20:35:40.822317  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:40.822754  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:41.322470  522827 type.go:168] "Request Body" body=""
	I1217 20:35:41.322556  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:41.322901  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:41.822209  522827 type.go:168] "Request Body" body=""
	I1217 20:35:41.822282  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:41.822536  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:35:41.822583  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:35:42.322519  522827 type.go:168] "Request Body" body=""
	I1217 20:35:42.322603  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:42.322998  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:42.822247  522827 type.go:168] "Request Body" body=""
	I1217 20:35:42.822336  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:42.822693  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:43.322188  522827 type.go:168] "Request Body" body=""
	I1217 20:35:43.322249  522827 node_ready.go:38] duration metric: took 6m0.000239045s for node "functional-655452" to be "Ready" ...
	I1217 20:35:43.325291  522827 out.go:203] 
	W1217 20:35:43.328188  522827 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1217 20:35:43.328206  522827 out.go:285] * 
	W1217 20:35:43.330331  522827 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 20:35:43.333111  522827 out.go:203] 
	
	
	==> CRI-O <==
	Dec 17 20:35:52 functional-655452 crio[5447]: time="2025-12-17T20:35:52.248452743Z" level=info msg="Neither image nor artifact registry.k8s.io/pause:latest found" id=8a037fb8-47fe-4682-a06b-c651dbe2b91e name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:35:53 functional-655452 crio[5447]: time="2025-12-17T20:35:53.342654144Z" level=info msg="Checking image status: minikube-local-cache-test:functional-655452" id=2bd5fa93-b92c-4f8d-a6f9-3bc1f05793ac name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:35:53 functional-655452 crio[5447]: time="2025-12-17T20:35:53.342823049Z" level=info msg="Resolving \"minikube-local-cache-test\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 17 20:35:53 functional-655452 crio[5447]: time="2025-12-17T20:35:53.342864723Z" level=info msg="Image minikube-local-cache-test:functional-655452 not found" id=2bd5fa93-b92c-4f8d-a6f9-3bc1f05793ac name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:35:53 functional-655452 crio[5447]: time="2025-12-17T20:35:53.342939669Z" level=info msg="Neither image nor artfiact minikube-local-cache-test:functional-655452 found" id=2bd5fa93-b92c-4f8d-a6f9-3bc1f05793ac name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:35:53 functional-655452 crio[5447]: time="2025-12-17T20:35:53.367286402Z" level=info msg="Checking image status: docker.io/library/minikube-local-cache-test:functional-655452" id=c3e35e06-0010-4a5b-9ef0-d2f451c83286 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:35:53 functional-655452 crio[5447]: time="2025-12-17T20:35:53.367426539Z" level=info msg="Image docker.io/library/minikube-local-cache-test:functional-655452 not found" id=c3e35e06-0010-4a5b-9ef0-d2f451c83286 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:35:53 functional-655452 crio[5447]: time="2025-12-17T20:35:53.367467671Z" level=info msg="Neither image nor artfiact docker.io/library/minikube-local-cache-test:functional-655452 found" id=c3e35e06-0010-4a5b-9ef0-d2f451c83286 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:35:53 functional-655452 crio[5447]: time="2025-12-17T20:35:53.39201649Z" level=info msg="Checking image status: localhost/library/minikube-local-cache-test:functional-655452" id=a13d7720-bb07-4b8f-9410-0a0d82ddbada name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:35:53 functional-655452 crio[5447]: time="2025-12-17T20:35:53.392181046Z" level=info msg="Image localhost/library/minikube-local-cache-test:functional-655452 not found" id=a13d7720-bb07-4b8f-9410-0a0d82ddbada name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:35:53 functional-655452 crio[5447]: time="2025-12-17T20:35:53.392245268Z" level=info msg="Neither image nor artfiact localhost/library/minikube-local-cache-test:functional-655452 found" id=a13d7720-bb07-4b8f-9410-0a0d82ddbada name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:35:54 functional-655452 crio[5447]: time="2025-12-17T20:35:54.358690122Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=e9c9a3c5-f0ec-491f-b467-a4fb566a7e4a name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:35:54 functional-655452 crio[5447]: time="2025-12-17T20:35:54.699662766Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=2f58d9f9-d198-43e4-b155-576924a7469c name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:35:54 functional-655452 crio[5447]: time="2025-12-17T20:35:54.699807563Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=2f58d9f9-d198-43e4-b155-576924a7469c name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:35:54 functional-655452 crio[5447]: time="2025-12-17T20:35:54.699843494Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=2f58d9f9-d198-43e4-b155-576924a7469c name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:35:55 functional-655452 crio[5447]: time="2025-12-17T20:35:55.242462245Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=8ee73752-fcec-461d-ad04-e7b693a40594 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:35:55 functional-655452 crio[5447]: time="2025-12-17T20:35:55.242603243Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=8ee73752-fcec-461d-ad04-e7b693a40594 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:35:55 functional-655452 crio[5447]: time="2025-12-17T20:35:55.242639059Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=8ee73752-fcec-461d-ad04-e7b693a40594 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:35:55 functional-655452 crio[5447]: time="2025-12-17T20:35:55.267407227Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=96ec17f9-40c0-4dcf-9b01-6d9e24b90fd5 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:35:55 functional-655452 crio[5447]: time="2025-12-17T20:35:55.267560402Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=96ec17f9-40c0-4dcf-9b01-6d9e24b90fd5 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:35:55 functional-655452 crio[5447]: time="2025-12-17T20:35:55.26762245Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=96ec17f9-40c0-4dcf-9b01-6d9e24b90fd5 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:35:55 functional-655452 crio[5447]: time="2025-12-17T20:35:55.293587381Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=b9a60d8c-7a33-4a91-bdf0-5e02a9ced5db name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:35:55 functional-655452 crio[5447]: time="2025-12-17T20:35:55.293749548Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=b9a60d8c-7a33-4a91-bdf0-5e02a9ced5db name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:35:55 functional-655452 crio[5447]: time="2025-12-17T20:35:55.293805229Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=b9a60d8c-7a33-4a91-bdf0-5e02a9ced5db name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:35:55 functional-655452 crio[5447]: time="2025-12-17T20:35:55.867866993Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=7b807932-d5c4-4be7-9710-4a58a027c9d7 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:35:57.381799    9477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:35:57.382400    9477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:35:57.383970    9477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:35:57.384434    9477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:35:57.385945    9477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec17 19:02] hrtimer: interrupt took 71042327 ns
	[Dec17 20:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec17 20:12] overlayfs: idmapped layers are currently not supported
	[  +0.080288] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec17 20:17] overlayfs: idmapped layers are currently not supported
	[Dec17 20:18] overlayfs: idmapped layers are currently not supported
	[Dec17 20:35] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 20:35:57 up  3:18,  0 user,  load average: 0.56, 0.36, 0.91
	Linux functional-655452 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 20:35:54 functional-655452 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 20:35:55 functional-655452 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1152.
	Dec 17 20:35:55 functional-655452 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 20:35:55 functional-655452 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 20:35:55 functional-655452 kubelet[9321]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 20:35:55 functional-655452 kubelet[9321]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 20:35:55 functional-655452 kubelet[9321]: E1217 20:35:55.381027    9321 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 20:35:55 functional-655452 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 20:35:55 functional-655452 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 20:35:56 functional-655452 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1153.
	Dec 17 20:35:56 functional-655452 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 20:35:56 functional-655452 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 20:35:56 functional-655452 kubelet[9370]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 20:35:56 functional-655452 kubelet[9370]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 20:35:56 functional-655452 kubelet[9370]: E1217 20:35:56.136252    9370 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 20:35:56 functional-655452 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 20:35:56 functional-655452 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 20:35:56 functional-655452 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1154.
	Dec 17 20:35:56 functional-655452 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 20:35:56 functional-655452 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 20:35:56 functional-655452 kubelet[9391]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 20:35:56 functional-655452 kubelet[9391]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 20:35:56 functional-655452 kubelet[9391]: E1217 20:35:56.885247    9391 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 20:35:56 functional-655452 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 20:35:56 functional-655452 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
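
The kubelet section of the log above pins down the root cause of this failure chain: kubelet v1.35.0-rc.1 refuses to start on a host still running the cgroup v1 hierarchy, so the apiserver on 8441 never comes up and every poll above ends in "connection refused". A minimal sketch of the underlying host check (not minikube's own code; testing for /sys/fs/cgroup/cgroup.controllers is the standard cgroup v2 indicator):

	// cgroupcheck.go - minimal sketch: detect whether the host exposes the
	// unified cgroup v2 hierarchy, which the kubelet validation error above
	// says it requires.
	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// cgroup.controllers exists only under the cgroup v2 unified
		// hierarchy; on a pure cgroup v1 host this Stat fails.
		if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
			fmt.Println("cgroup v2 (unified): kubelet >= v1.35 can start")
		} else {
			fmt.Println("cgroup v1: kubelet fails validation, as in the log above")
		}
	}
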
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-655452 -n functional-655452
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-655452 -n functional-655452: exit status 2 (366.985567ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-655452" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd (2.45s)
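
The bulk of the log above is minikube's node_ready wait polling the node every 500ms until its 6m deadline. A stripped-down sketch of that poll-until-deadline pattern (hypothetical helper waitTCP, stdlib only; not the project's actual wait package):

	// waitready.go - illustrative sketch of a poll loop like the one behind
	// the node_ready.go retries above (names here are hypothetical).
	package main

	import (
		"context"
		"fmt"
		"net"
		"time"
	)

	// waitTCP polls addr until it accepts a TCP connection or ctx expires,
	// mirroring the 500ms retry cadence visible in the log.
	func waitTCP(ctx context.Context, addr string) error {
		ticker := time.NewTicker(500 * time.Millisecond)
		defer ticker.Stop()
		for {
			select {
			case <-ctx.Done():
				// Surfaces like "WaitNodeCondition: context deadline exceeded".
				return fmt.Errorf("WaitNodeCondition: %w", ctx.Err())
			case <-ticker.C:
				conn, err := net.DialTimeout("tcp", addr, time.Second)
				if err != nil {
					continue // e.g. "connect: connection refused" while the apiserver is down
				}
				conn.Close()
				return nil
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
		if err := waitTCP(ctx, "192.168.49.2:8441"); err != nil {
			fmt.Println("node never became ready:", err)
		}
	}
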

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly (2.8s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-655452 get pods
functional_test.go:756: (dbg) Non-zero exit: out/kubectl --context functional-655452 get pods: exit status 1 (111.879557ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:759: failed to run kubectl directly. args "out/kubectl --context functional-655452 get pods": exit status 1
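
When kubectl reports "connection refused" like this, probing the endpoint directly distinguishes a dead apiserver from one that is up but rejecting the client. A diagnostic sketch (assumes the 192.168.49.2:8441 endpoint above; TLS verification is skipped only because this is a reachability check, not an authenticated call):

	// probe.go - sketch: tell "apiserver down" apart from "apiserver up but
	// rejecting us" for the endpoint kubectl failed against above.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 3 * time.Second,
			Transport: &http.Transport{
				// Diagnosis only: skip cert verification so /healthz is
				// reachable without the cluster CA bundle.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://192.168.49.2:8441/healthz")
		if err != nil {
			// "connection refused" here matches the kubectl failure: nothing
			// is listening on 8441, so the apiserver itself is not running.
			fmt.Println("apiserver unreachable:", err)
			return
		}
		defer resp.Body.Close()
		// Even a 401/403 status here would mean the process is up.
		fmt.Println("apiserver answered:", resp.Status)
	}
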
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-655452
helpers_test.go:244: (dbg) docker inspect functional-655452:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ce7a6a54d14f425b4187bd0ca1b803527a4413b0e2019a0bca6ff0c4cc720b0f",
	        "Created": "2025-12-17T20:21:23.127111799Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 517339,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T20:21:23.205943598Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/ce7a6a54d14f425b4187bd0ca1b803527a4413b0e2019a0bca6ff0c4cc720b0f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ce7a6a54d14f425b4187bd0ca1b803527a4413b0e2019a0bca6ff0c4cc720b0f/hostname",
	        "HostsPath": "/var/lib/docker/containers/ce7a6a54d14f425b4187bd0ca1b803527a4413b0e2019a0bca6ff0c4cc720b0f/hosts",
	        "LogPath": "/var/lib/docker/containers/ce7a6a54d14f425b4187bd0ca1b803527a4413b0e2019a0bca6ff0c4cc720b0f/ce7a6a54d14f425b4187bd0ca1b803527a4413b0e2019a0bca6ff0c4cc720b0f-json.log",
	        "Name": "/functional-655452",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-655452:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-655452",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ce7a6a54d14f425b4187bd0ca1b803527a4413b0e2019a0bca6ff0c4cc720b0f",
	                "LowerDir": "/var/lib/docker/overlay2/b078ea641dafe3bb75ca3d83c43fd851f0f209e5f3a5a8b9ceab888e394031a8-init/diff:/var/lib/docker/overlay2/c4759b344ecb109c83d66e4ef56c76903f1d1e597efb86ab2d2c7911b1130a8f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b078ea641dafe3bb75ca3d83c43fd851f0f209e5f3a5a8b9ceab888e394031a8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b078ea641dafe3bb75ca3d83c43fd851f0f209e5f3a5a8b9ceab888e394031a8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b078ea641dafe3bb75ca3d83c43fd851f0f209e5f3a5a8b9ceab888e394031a8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-655452",
	                "Source": "/var/lib/docker/volumes/functional-655452/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-655452",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-655452",
	                "name.minikube.sigs.k8s.io": "functional-655452",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "77ae7dbcf69b3201be4ce65a88d235dbad078142318e01a5939425f9766fb924",
	            "SandboxKey": "/var/run/docker/netns/77ae7dbcf69b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33178"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33179"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33182"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33180"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33181"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-655452": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "36:c6:98:b5:58:5a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b68cd6e69ccdb81b0870dd976e2d5401ca696480842d2c469293121d59435cfb",
	                    "EndpointID": "82926bb4b08b5f66131c1026a8bc72a582e9cb8c6d97a278a494965923f47cb0",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-655452",
	                        "ce7a6a54d14f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
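
The NetworkSettings block above shows the guest apiserver port 8441/tcp published on 127.0.0.1:33181. A small sketch that extracts that mapping programmatically, using docker's standard Go-template inspect format and the container name shown above:

	// hostport.go - sketch: read the host port docker published for the
	// container's 8441/tcp via `docker inspect --format`.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("docker", "inspect",
			"--format", `{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}`,
			"functional-655452").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		// Prints e.g. 33181, matching the NetworkSettings block above.
		fmt.Println("apiserver host port:", strings.TrimSpace(string(out)))
	}
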
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-655452 -n functional-655452
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-655452 -n functional-655452: exit status 2 (308.951796ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p functional-655452 logs -n 25: (1.092587407s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                      ARGS                                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-643319 image ls --format short --alsologtostderr                                                                                     │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │ 17 Dec 25 20:21 UTC │
	│ image   │ functional-643319 image ls --format yaml --alsologtostderr                                                                                      │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │ 17 Dec 25 20:21 UTC │
	│ ssh     │ functional-643319 ssh pgrep buildkitd                                                                                                           │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │                     │
	│ image   │ functional-643319 image build -t localhost/my-image:functional-643319 testdata/build --alsologtostderr                                          │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │ 17 Dec 25 20:21 UTC │
	│ image   │ functional-643319 image ls --format json --alsologtostderr                                                                                      │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │ 17 Dec 25 20:21 UTC │
	│ image   │ functional-643319 image ls --format table --alsologtostderr                                                                                     │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │ 17 Dec 25 20:21 UTC │
	│ image   │ functional-643319 image ls                                                                                                                      │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │ 17 Dec 25 20:21 UTC │
	│ delete  │ -p functional-643319                                                                                                                            │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │ 17 Dec 25 20:21 UTC │
	│ start   │ -p functional-655452 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │                     │
	│ start   │ -p functional-655452 --alsologtostderr -v=8                                                                                                     │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:29 UTC │                     │
	│ cache   │ functional-655452 cache add registry.k8s.io/pause:3.1                                                                                           │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:35 UTC │ 17 Dec 25 20:35 UTC │
	│ cache   │ functional-655452 cache add registry.k8s.io/pause:3.3                                                                                           │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:35 UTC │ 17 Dec 25 20:35 UTC │
	│ cache   │ functional-655452 cache add registry.k8s.io/pause:latest                                                                                        │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:35 UTC │ 17 Dec 25 20:35 UTC │
	│ cache   │ functional-655452 cache add minikube-local-cache-test:functional-655452                                                                         │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:35 UTC │ 17 Dec 25 20:35 UTC │
	│ cache   │ functional-655452 cache delete minikube-local-cache-test:functional-655452                                                                      │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:35 UTC │ 17 Dec 25 20:35 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                │ minikube          │ jenkins │ v1.37.0 │ 17 Dec 25 20:35 UTC │ 17 Dec 25 20:35 UTC │
	│ cache   │ list                                                                                                                                            │ minikube          │ jenkins │ v1.37.0 │ 17 Dec 25 20:35 UTC │ 17 Dec 25 20:35 UTC │
	│ ssh     │ functional-655452 ssh sudo crictl images                                                                                                        │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:35 UTC │ 17 Dec 25 20:35 UTC │
	│ ssh     │ functional-655452 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                              │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:35 UTC │ 17 Dec 25 20:35 UTC │
	│ ssh     │ functional-655452 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                         │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:35 UTC │                     │
	│ cache   │ functional-655452 cache reload                                                                                                                  │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:35 UTC │ 17 Dec 25 20:35 UTC │
	│ ssh     │ functional-655452 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                         │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:35 UTC │ 17 Dec 25 20:35 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                │ minikube          │ jenkins │ v1.37.0 │ 17 Dec 25 20:35 UTC │ 17 Dec 25 20:35 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                             │ minikube          │ jenkins │ v1.37.0 │ 17 Dec 25 20:35 UTC │ 17 Dec 25 20:35 UTC │
	│ kubectl │ functional-655452 kubectl -- --context functional-655452 get pods                                                                               │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:35 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 20:29:37
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 20:29:37.230217  522827 out.go:360] Setting OutFile to fd 1 ...
	I1217 20:29:37.230338  522827 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:29:37.230348  522827 out.go:374] Setting ErrFile to fd 2...
	I1217 20:29:37.230354  522827 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:29:37.230641  522827 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-485134/.minikube/bin
	I1217 20:29:37.231040  522827 out.go:368] Setting JSON to false
	I1217 20:29:37.231956  522827 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":11527,"bootTime":1765991851,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1217 20:29:37.232033  522827 start.go:143] virtualization:  
	I1217 20:29:37.235360  522827 out.go:179] * [functional-655452] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 20:29:37.239166  522827 out.go:179]   - MINIKUBE_LOCATION=21808
	I1217 20:29:37.239533  522827 notify.go:221] Checking for updates...
	I1217 20:29:37.245507  522827 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 20:29:37.248369  522827 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-485134/kubeconfig
	I1217 20:29:37.251209  522827 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-485134/.minikube
	I1217 20:29:37.254179  522827 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 20:29:37.257129  522827 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 20:29:37.260562  522827 config.go:182] Loaded profile config "functional-655452": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 20:29:37.260726  522827 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 20:29:37.289208  522827 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 20:29:37.289391  522827 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 20:29:37.344995  522827 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 20:29:37.33566048 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 20:29:37.345107  522827 docker.go:319] overlay module found
	I1217 20:29:37.348246  522827 out.go:179] * Using the docker driver based on existing profile
	I1217 20:29:37.351193  522827 start.go:309] selected driver: docker
	I1217 20:29:37.351220  522827 start.go:927] validating driver "docker" against &{Name:functional-655452 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-655452 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:29:37.351378  522827 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 20:29:37.351479  522827 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 20:29:37.406404  522827 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 20:29:37.397152083 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 20:29:37.406839  522827 cni.go:84] Creating CNI manager for ""
	I1217 20:29:37.406903  522827 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 20:29:37.406958  522827 start.go:353] cluster config:
	{Name:functional-655452 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-655452 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:29:37.410074  522827 out.go:179] * Starting "functional-655452" primary control-plane node in "functional-655452" cluster
	I1217 20:29:37.413044  522827 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 20:29:37.415960  522827 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 20:29:37.418922  522827 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1217 20:29:37.418997  522827 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21808-485134/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4
	I1217 20:29:37.419012  522827 cache.go:65] Caching tarball of preloaded images
	I1217 20:29:37.419028  522827 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 20:29:37.419099  522827 preload.go:238] Found /home/jenkins/minikube-integration/21808-485134/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1217 20:29:37.419110  522827 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on crio
	I1217 20:29:37.419218  522827 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/config.json ...
	I1217 20:29:37.438883  522827 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 20:29:37.438908  522827 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 20:29:37.438929  522827 cache.go:243] Successfully downloaded all kic artifacts
	I1217 20:29:37.438964  522827 start.go:360] acquireMachinesLock for functional-655452: {Name:mk8b480106227149e1b79da3c4f580b9dc5e0f96 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 20:29:37.439024  522827 start.go:364] duration metric: took 37.399µs to acquireMachinesLock for "functional-655452"
	I1217 20:29:37.439047  522827 start.go:96] Skipping create...Using existing machine configuration
	I1217 20:29:37.439057  522827 fix.go:54] fixHost starting: 
	I1217 20:29:37.439341  522827 cli_runner.go:164] Run: docker container inspect functional-655452 --format={{.State.Status}}
	I1217 20:29:37.456072  522827 fix.go:112] recreateIfNeeded on functional-655452: state=Running err=<nil>
	W1217 20:29:37.456113  522827 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 20:29:37.459179  522827 out.go:252] * Updating the running docker "functional-655452" container ...
	I1217 20:29:37.459210  522827 machine.go:94] provisionDockerMachine start ...
	I1217 20:29:37.459290  522827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
	I1217 20:29:37.476101  522827 main.go:143] libmachine: Using SSH client type: native
	I1217 20:29:37.476449  522827 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1217 20:29:37.476466  522827 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 20:29:37.607148  522827 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-655452
	
	I1217 20:29:37.607176  522827 ubuntu.go:182] provisioning hostname "functional-655452"
	I1217 20:29:37.607253  522827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
	I1217 20:29:37.625523  522827 main.go:143] libmachine: Using SSH client type: native
	I1217 20:29:37.625850  522827 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1217 20:29:37.625869  522827 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-655452 && echo "functional-655452" | sudo tee /etc/hostname
	I1217 20:29:37.765012  522827 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-655452
	
	I1217 20:29:37.765095  522827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
	I1217 20:29:37.783574  522827 main.go:143] libmachine: Using SSH client type: native
	I1217 20:29:37.784233  522827 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1217 20:29:37.784256  522827 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-655452' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-655452/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-655452' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 20:29:37.923858  522827 main.go:143] libmachine: SSH cmd err, output: <nil>: 
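The provisioning steps above run plain shell over the container's published SSH port (33178) as the "docker" user, using the profile's generated key. A minimal way to replay the same hostname check by hand, assuming the key path and port shown later in this run:

    ssh -o StrictHostKeyChecking=no \
        -i /home/jenkins/minikube-integration/21808-485134/.minikube/machines/functional-655452/id_rsa \
        -p 33178 docker@127.0.0.1 hostname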
	I1217 20:29:37.923885  522827 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21808-485134/.minikube CaCertPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21808-485134/.minikube}
	I1217 20:29:37.923918  522827 ubuntu.go:190] setting up certificates
	I1217 20:29:37.923930  522827 provision.go:84] configureAuth start
	I1217 20:29:37.923995  522827 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-655452
	I1217 20:29:37.942198  522827 provision.go:143] copyHostCerts
	I1217 20:29:37.942245  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem
	I1217 20:29:37.942294  522827 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem, removing ...
	I1217 20:29:37.942308  522827 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem
	I1217 20:29:37.942385  522827 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem (1082 bytes)
	I1217 20:29:37.942483  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem
	I1217 20:29:37.942506  522827 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem, removing ...
	I1217 20:29:37.942510  522827 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem
	I1217 20:29:37.942538  522827 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem (1123 bytes)
	I1217 20:29:37.942584  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem
	I1217 20:29:37.942605  522827 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem, removing ...
	I1217 20:29:37.942613  522827 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem
	I1217 20:29:37.942638  522827 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem (1675 bytes)
	I1217 20:29:37.942696  522827 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem org=jenkins.functional-655452 san=[127.0.0.1 192.168.49.2 functional-655452 localhost minikube]
	I1217 20:29:38.205373  522827 provision.go:177] copyRemoteCerts
	I1217 20:29:38.205444  522827 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 20:29:38.205488  522827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
	I1217 20:29:38.222940  522827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/functional-655452/id_rsa Username:docker}
	I1217 20:29:38.324557  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1217 20:29:38.324643  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 20:29:38.342369  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1217 20:29:38.342442  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 20:29:38.361702  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1217 20:29:38.361816  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 20:29:38.379229  522827 provision.go:87] duration metric: took 455.281269ms to configureAuth
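configureAuth generates the server certificate in Go, with the SANs listed above (127.0.0.1, 192.168.49.2, functional-655452, localhost, minikube) and the per-profile org jenkins.functional-655452. An illustrative openssl equivalent, not what the test ran, using the CA file names from this log and the profile's 26280h (1095-day) cert expiration:

    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
        -subj "/O=jenkins.functional-655452" -out server.csr
    printf 'subjectAltName=IP:127.0.0.1,IP:192.168.49.2,DNS:functional-655452,DNS:localhost,DNS:minikube\n' > san.cnf
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem \
        -CAcreateserial -days 1095 -extfile san.cnf -out server.pem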
	I1217 20:29:38.379306  522827 ubuntu.go:206] setting minikube options for container-runtime
	I1217 20:29:38.379506  522827 config.go:182] Loaded profile config "functional-655452": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 20:29:38.379650  522827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
	I1217 20:29:38.397098  522827 main.go:143] libmachine: Using SSH client type: native
	I1217 20:29:38.397425  522827 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1217 20:29:38.397449  522827 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 20:29:38.710104  522827 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 20:29:38.710129  522827 machine.go:97] duration metric: took 1.250909554s to provisionDockerMachine
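The tee above drops CRIO_MINIKUBE_OPTIONS (here, --insecure-registry for the service CIDR 10.96.0.0/12) into /etc/sysconfig/crio.minikube and restarts CRI-O to pick it up. A quick spot-check from the host, assuming the report's binary path:

    out/minikube-linux-arm64 -p functional-655452 ssh -- cat /etc/sysconfig/crio.minikube
    out/minikube-linux-arm64 -p functional-655452 ssh -- sudo systemctl is-active crio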
	I1217 20:29:38.710141  522827 start.go:293] postStartSetup for "functional-655452" (driver="docker")
	I1217 20:29:38.710173  522827 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 20:29:38.710243  522827 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 20:29:38.710290  522827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
	I1217 20:29:38.729105  522827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/functional-655452/id_rsa Username:docker}
	I1217 20:29:38.823561  522827 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 20:29:38.826921  522827 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1217 20:29:38.826944  522827 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1217 20:29:38.826949  522827 command_runner.go:130] > VERSION_ID="12"
	I1217 20:29:38.826954  522827 command_runner.go:130] > VERSION="12 (bookworm)"
	I1217 20:29:38.826958  522827 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1217 20:29:38.826962  522827 command_runner.go:130] > ID=debian
	I1217 20:29:38.826966  522827 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1217 20:29:38.826971  522827 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1217 20:29:38.826976  522827 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1217 20:29:38.827033  522827 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 20:29:38.827056  522827 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 20:29:38.827068  522827 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-485134/.minikube/addons for local assets ...
	I1217 20:29:38.827127  522827 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-485134/.minikube/files for local assets ...
	I1217 20:29:38.827213  522827 filesync.go:149] local asset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem -> 4884122.pem in /etc/ssl/certs
	I1217 20:29:38.827224  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem -> /etc/ssl/certs/4884122.pem
	I1217 20:29:38.827310  522827 filesync.go:149] local asset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/test/nested/copy/488412/hosts -> hosts in /etc/test/nested/copy/488412
	I1217 20:29:38.827318  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/test/nested/copy/488412/hosts -> /etc/test/nested/copy/488412/hosts
	I1217 20:29:38.827361  522827 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/488412
	I1217 20:29:38.835073  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem --> /etc/ssl/certs/4884122.pem (1708 bytes)
	I1217 20:29:38.853051  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/test/nested/copy/488412/hosts --> /etc/test/nested/copy/488412/hosts (40 bytes)
	I1217 20:29:38.870277  522827 start.go:296] duration metric: took 160.119138ms for postStartSetup
	I1217 20:29:38.870416  522827 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 20:29:38.870497  522827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
	I1217 20:29:38.887313  522827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/functional-655452/id_rsa Username:docker}
	I1217 20:29:38.980667  522827 command_runner.go:130] > 14%
	I1217 20:29:38.980748  522827 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 20:29:38.985147  522827 command_runner.go:130] > 169G
	I1217 20:29:38.985687  522827 fix.go:56] duration metric: took 1.546626529s for fixHost
	I1217 20:29:38.985712  522827 start.go:83] releasing machines lock for "functional-655452", held for 1.546675825s
	I1217 20:29:38.985789  522827 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-655452
	I1217 20:29:39.004882  522827 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem (1338 bytes)
	W1217 20:29:39.004958  522827 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412_empty.pem, impossibly tiny 0 bytes
	I1217 20:29:39.004969  522827 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 20:29:39.005005  522827 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem (1082 bytes)
	I1217 20:29:39.005049  522827 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem (1123 bytes)
	I1217 20:29:39.005073  522827 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem (1675 bytes)
	I1217 20:29:39.005126  522827 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem (1708 bytes)
	I1217 20:29:39.005177  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:29:39.005197  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem -> /usr/share/ca-certificates/488412.pem
	I1217 20:29:39.005217  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem -> /usr/share/ca-certificates/4884122.pem
	I1217 20:29:39.005238  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 20:29:39.005294  522827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
	I1217 20:29:39.023309  522827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/functional-655452/id_rsa Username:docker}
	I1217 20:29:39.128919  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem --> /usr/share/ca-certificates/488412.pem (1338 bytes)
	I1217 20:29:39.146238  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem --> /usr/share/ca-certificates/4884122.pem (1708 bytes)
	I1217 20:29:39.163663  522827 ssh_runner.go:195] Run: openssl version
	I1217 20:29:39.169395  522827 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1217 20:29:39.169821  522827 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/488412.pem
	I1217 20:29:39.177042  522827 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/488412.pem /etc/ssl/certs/488412.pem
	I1217 20:29:39.184227  522827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/488412.pem
	I1217 20:29:39.187671  522827 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 17 20:21 /usr/share/ca-certificates/488412.pem
	I1217 20:29:39.187835  522827 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 20:21 /usr/share/ca-certificates/488412.pem
	I1217 20:29:39.187899  522827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/488412.pem
	I1217 20:29:39.232645  522827 command_runner.go:130] > 51391683
	I1217 20:29:39.233156  522827 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 20:29:39.240764  522827 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4884122.pem
	I1217 20:29:39.248070  522827 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4884122.pem /etc/ssl/certs/4884122.pem
	I1217 20:29:39.256139  522827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4884122.pem
	I1217 20:29:39.260468  522827 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 17 20:21 /usr/share/ca-certificates/4884122.pem
	I1217 20:29:39.260613  522827 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 20:21 /usr/share/ca-certificates/4884122.pem
	I1217 20:29:39.260717  522827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4884122.pem
	I1217 20:29:39.301324  522827 command_runner.go:130] > 3ec20f2e
	I1217 20:29:39.301774  522827 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 20:29:39.309564  522827 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:29:39.316908  522827 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 20:29:39.330430  522827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:29:39.334931  522827 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 17 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:29:39.335647  522827 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:29:39.335725  522827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:29:39.377554  522827 command_runner.go:130] > b5213941
	I1217 20:29:39.378955  522827 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 20:29:39.389619  522827 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-certificates >/dev/null 2>&1 && sudo update-ca-certificates || true"
	I1217 20:29:39.393257  522827 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-trust >/dev/null 2>&1 && sudo update-ca-trust extract || true"
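Each PEM above is hashed with openssl x509 -hash and symlinked as <hash>.0 under /etc/ssl/certs, which is how OpenSSL's CApath lookup resolves trust anchors; update-ca-certificates then rebuilds the bundle. The pattern for one cert, with the hash this run computed (b5213941):

    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
    sudo update-ca-certificates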
	I1217 20:29:39.396841  522827 ssh_runner.go:195] Run: cat /version.json
	I1217 20:29:39.396923  522827 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 20:29:39.487006  522827 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1217 20:29:39.489563  522827 command_runner.go:130] > {"iso_version": "v1.37.0-1765579389-22117", "kicbase_version": "v0.0.48-1765661130-22141", "minikube_version": "v1.37.0", "commit": "cbb33128a244032d08f8fc6e6c9f03b30f0da3e4"}
	I1217 20:29:39.489734  522827 ssh_runner.go:195] Run: systemctl --version
	I1217 20:29:39.495686  522827 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1217 20:29:39.495789  522827 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1217 20:29:39.496199  522827 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 20:29:39.531768  522827 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1217 20:29:39.536045  522827 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1217 20:29:39.536498  522827 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 20:29:39.536609  522827 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 20:29:39.544584  522827 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
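The find invocation above is passed as raw argv, so its parentheses appear unquoted in the log. Quoted for an interactive shell it reads as follows (GNU find substitutes {} even inside the sh -c string):

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
        \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
        -printf '%p, ' -exec sh -c 'sudo mv {} {}.mk_disabled' \;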
	I1217 20:29:39.544609  522827 start.go:496] detecting cgroup driver to use...
	I1217 20:29:39.544639  522827 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 20:29:39.544686  522827 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 20:29:39.559677  522827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 20:29:39.572537  522827 docker.go:218] disabling cri-docker service (if available) ...
	I1217 20:29:39.572629  522827 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 20:29:39.588063  522827 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 20:29:39.601417  522827 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 20:29:39.711338  522827 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 20:29:39.828534  522827 docker.go:234] disabling docker service ...
	I1217 20:29:39.828602  522827 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 20:29:39.843450  522827 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 20:29:39.856661  522827 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 20:29:39.988443  522827 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 20:29:40.133139  522827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
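With cri-docker and docker stopped, disabled, and masked, CRI-O is left as the only runtime answering the kubelet. One way to spot-check that none of them is active (three "inactive" lines and a non-zero exit are the expected result):

    out/minikube-linux-arm64 -p functional-655452 ssh -- systemctl is-active docker.service cri-docker.service containerd.service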
	I1217 20:29:40.147217  522827 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 20:29:40.161697  522827 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
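Writing /etc/crictl.yaml pins the runtime endpoint so the bare crictl calls later in this log reach CRI-O without extra flags. The explicit equivalent of what the file provides:

    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version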
	I1217 20:29:40.163096  522827 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 20:29:40.163182  522827 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:29:40.173178  522827 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1217 20:29:40.173338  522827 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:29:40.182803  522827 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:29:40.192168  522827 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:29:40.201463  522827 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 20:29:40.209602  522827 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:29:40.218600  522827 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:29:40.227088  522827 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:29:40.236327  522827 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 20:29:40.243154  522827 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1217 20:29:40.244193  522827 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 20:29:40.251635  522827 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:29:40.361488  522827 ssh_runner.go:195] Run: sudo systemctl restart crio
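The sed series above rewrites the 02-crio.conf drop-in in place (pause image, cgroupfs cgroup manager, conmon cgroup, unprivileged-port sysctl) before the restart. To confirm the edits landed, grep just the keys this run touched:

    sudo grep -nE 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf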
	I1217 20:29:40.546740  522827 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 20:29:40.546847  522827 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 20:29:40.551021  522827 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1217 20:29:40.551089  522827 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1217 20:29:40.551102  522827 command_runner.go:130] > Device: 0,72	Inode: 1636        Links: 1
	I1217 20:29:40.551127  522827 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1217 20:29:40.551137  522827 command_runner.go:130] > Access: 2025-12-17 20:29:40.478294705 +0000
	I1217 20:29:40.551143  522827 command_runner.go:130] > Modify: 2025-12-17 20:29:40.478294705 +0000
	I1217 20:29:40.551149  522827 command_runner.go:130] > Change: 2025-12-17 20:29:40.478294705 +0000
	I1217 20:29:40.551152  522827 command_runner.go:130] >  Birth: -
	I1217 20:29:40.551189  522827 start.go:564] Will wait 60s for crictl version
	I1217 20:29:40.551247  522827 ssh_runner.go:195] Run: which crictl
	I1217 20:29:40.554786  522827 command_runner.go:130] > /usr/local/bin/crictl
	I1217 20:29:40.554923  522827 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 20:29:40.577444  522827 command_runner.go:130] > Version:  0.1.0
	I1217 20:29:40.577470  522827 command_runner.go:130] > RuntimeName:  cri-o
	I1217 20:29:40.577476  522827 command_runner.go:130] > RuntimeVersion:  1.34.3
	I1217 20:29:40.577491  522827 command_runner.go:130] > RuntimeApiVersion:  v1
	I1217 20:29:40.579694  522827 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 20:29:40.579819  522827 ssh_runner.go:195] Run: crio --version
	I1217 20:29:40.609324  522827 command_runner.go:130] > crio version 1.34.3
	I1217 20:29:40.609350  522827 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1217 20:29:40.609357  522827 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1217 20:29:40.609362  522827 command_runner.go:130] >    GitTreeState:   dirty
	I1217 20:29:40.609367  522827 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1217 20:29:40.609371  522827 command_runner.go:130] >    GoVersion:      go1.24.6
	I1217 20:29:40.609375  522827 command_runner.go:130] >    Compiler:       gc
	I1217 20:29:40.609382  522827 command_runner.go:130] >    Platform:       linux/arm64
	I1217 20:29:40.609386  522827 command_runner.go:130] >    Linkmode:       static
	I1217 20:29:40.609390  522827 command_runner.go:130] >    BuildTags:
	I1217 20:29:40.609393  522827 command_runner.go:130] >      static
	I1217 20:29:40.609397  522827 command_runner.go:130] >      netgo
	I1217 20:29:40.609401  522827 command_runner.go:130] >      osusergo
	I1217 20:29:40.609410  522827 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1217 20:29:40.609414  522827 command_runner.go:130] >      seccomp
	I1217 20:29:40.609421  522827 command_runner.go:130] >      apparmor
	I1217 20:29:40.609424  522827 command_runner.go:130] >      selinux
	I1217 20:29:40.609429  522827 command_runner.go:130] >    LDFlags:          unknown
	I1217 20:29:40.609433  522827 command_runner.go:130] >    SeccompEnabled:   true
	I1217 20:29:40.609441  522827 command_runner.go:130] >    AppArmorEnabled:  false
	I1217 20:29:40.609527  522827 ssh_runner.go:195] Run: crio --version
	I1217 20:29:40.638467  522827 command_runner.go:130] > crio version 1.34.3
	I1217 20:29:40.638491  522827 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1217 20:29:40.638499  522827 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1217 20:29:40.638505  522827 command_runner.go:130] >    GitTreeState:   dirty
	I1217 20:29:40.638509  522827 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1217 20:29:40.638516  522827 command_runner.go:130] >    GoVersion:      go1.24.6
	I1217 20:29:40.638520  522827 command_runner.go:130] >    Compiler:       gc
	I1217 20:29:40.638533  522827 command_runner.go:130] >    Platform:       linux/arm64
	I1217 20:29:40.638543  522827 command_runner.go:130] >    Linkmode:       static
	I1217 20:29:40.638547  522827 command_runner.go:130] >    BuildTags:
	I1217 20:29:40.638550  522827 command_runner.go:130] >      static
	I1217 20:29:40.638554  522827 command_runner.go:130] >      netgo
	I1217 20:29:40.638558  522827 command_runner.go:130] >      osusergo
	I1217 20:29:40.638568  522827 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1217 20:29:40.638572  522827 command_runner.go:130] >      seccomp
	I1217 20:29:40.638576  522827 command_runner.go:130] >      apparmor
	I1217 20:29:40.638583  522827 command_runner.go:130] >      selinux
	I1217 20:29:40.638587  522827 command_runner.go:130] >    LDFlags:          unknown
	I1217 20:29:40.638592  522827 command_runner.go:130] >    SeccompEnabled:   true
	I1217 20:29:40.638604  522827 command_runner.go:130] >    AppArmorEnabled:  false
	I1217 20:29:40.644077  522827 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	I1217 20:29:40.647046  522827 cli_runner.go:164] Run: docker network inspect functional-655452 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
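The Go template above pulls name, driver, subnet, gateway, MTU, and container IPs from a single docker call. A trimmed sketch with the same network name, if only the addressing matters:

    docker network inspect functional-655452 \
        --format '{{.Name}} {{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'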
	I1217 20:29:40.665190  522827 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1217 20:29:40.669398  522827 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1217 20:29:40.669593  522827 kubeadm.go:884] updating cluster {Name:functional-655452 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-655452 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 20:29:40.669700  522827 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1217 20:29:40.669779  522827 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 20:29:40.704282  522827 command_runner.go:130] > {
	I1217 20:29:40.704302  522827 command_runner.go:130] >   "images":  [
	I1217 20:29:40.704307  522827 command_runner.go:130] >     {
	I1217 20:29:40.704316  522827 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1217 20:29:40.704321  522827 command_runner.go:130] >       "repoTags":  [
	I1217 20:29:40.704328  522827 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1217 20:29:40.704331  522827 command_runner.go:130] >       ],
	I1217 20:29:40.704335  522827 command_runner.go:130] >       "repoDigests":  [
	I1217 20:29:40.704350  522827 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1217 20:29:40.704362  522827 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1217 20:29:40.704370  522827 command_runner.go:130] >       ],
	I1217 20:29:40.704374  522827 command_runner.go:130] >       "size":  "111333938",
	I1217 20:29:40.704379  522827 command_runner.go:130] >       "username":  "",
	I1217 20:29:40.704389  522827 command_runner.go:130] >       "pinned":  false
	I1217 20:29:40.704403  522827 command_runner.go:130] >     },
	I1217 20:29:40.704406  522827 command_runner.go:130] >     {
	I1217 20:29:40.704413  522827 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1217 20:29:40.704419  522827 command_runner.go:130] >       "repoTags":  [
	I1217 20:29:40.704425  522827 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1217 20:29:40.704429  522827 command_runner.go:130] >       ],
	I1217 20:29:40.704433  522827 command_runner.go:130] >       "repoDigests":  [
	I1217 20:29:40.704445  522827 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1217 20:29:40.704454  522827 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1217 20:29:40.704460  522827 command_runner.go:130] >       ],
	I1217 20:29:40.704464  522827 command_runner.go:130] >       "size":  "29037500",
	I1217 20:29:40.704468  522827 command_runner.go:130] >       "username":  "",
	I1217 20:29:40.704476  522827 command_runner.go:130] >       "pinned":  false
	I1217 20:29:40.704482  522827 command_runner.go:130] >     },
	I1217 20:29:40.704485  522827 command_runner.go:130] >     {
	I1217 20:29:40.704494  522827 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1217 20:29:40.704503  522827 command_runner.go:130] >       "repoTags":  [
	I1217 20:29:40.704509  522827 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1217 20:29:40.704512  522827 command_runner.go:130] >       ],
	I1217 20:29:40.704516  522827 command_runner.go:130] >       "repoDigests":  [
	I1217 20:29:40.704528  522827 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1217 20:29:40.704536  522827 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1217 20:29:40.704542  522827 command_runner.go:130] >       ],
	I1217 20:29:40.704547  522827 command_runner.go:130] >       "size":  "74491780",
	I1217 20:29:40.704551  522827 command_runner.go:130] >       "username":  "nonroot",
	I1217 20:29:40.704556  522827 command_runner.go:130] >       "pinned":  false
	I1217 20:29:40.704561  522827 command_runner.go:130] >     },
	I1217 20:29:40.704568  522827 command_runner.go:130] >     {
	I1217 20:29:40.704579  522827 command_runner.go:130] >       "id":  "271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57",
	I1217 20:29:40.704583  522827 command_runner.go:130] >       "repoTags":  [
	I1217 20:29:40.704588  522827 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.6-0"
	I1217 20:29:40.704594  522827 command_runner.go:130] >       ],
	I1217 20:29:40.704598  522827 command_runner.go:130] >       "repoDigests":  [
	I1217 20:29:40.704605  522827 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890",
	I1217 20:29:40.704613  522827 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:aa0d8bc8f6a6c3655b8efe0a10c5bf052f5574ebe13f904c5b0c9002ce4b2561"
	I1217 20:29:40.704619  522827 command_runner.go:130] >       ],
	I1217 20:29:40.704623  522827 command_runner.go:130] >       "size":  "60850387",
	I1217 20:29:40.704626  522827 command_runner.go:130] >       "uid":  {
	I1217 20:29:40.704630  522827 command_runner.go:130] >         "value":  "0"
	I1217 20:29:40.704636  522827 command_runner.go:130] >       },
	I1217 20:29:40.704645  522827 command_runner.go:130] >       "username":  "",
	I1217 20:29:40.704657  522827 command_runner.go:130] >       "pinned":  false
	I1217 20:29:40.704660  522827 command_runner.go:130] >     },
	I1217 20:29:40.704664  522827 command_runner.go:130] >     {
	I1217 20:29:40.704673  522827 command_runner.go:130] >       "id":  "3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54",
	I1217 20:29:40.704679  522827 command_runner.go:130] >       "repoTags":  [
	I1217 20:29:40.704685  522827 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-rc.1"
	I1217 20:29:40.704689  522827 command_runner.go:130] >       ],
	I1217 20:29:40.704693  522827 command_runner.go:130] >       "repoDigests":  [
	I1217 20:29:40.704704  522827 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:58367b5c0428495c0c12411fa7a018f5d40fe57307b85d8935b1ed35706ff7ee",
	I1217 20:29:40.704721  522827 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:e6ee3594f9ff061c53d6721bc04b810ec4227e28da3bd98e59206d552d45cde8"
	I1217 20:29:40.704724  522827 command_runner.go:130] >       ],
	I1217 20:29:40.704729  522827 command_runner.go:130] >       "size":  "85015535",
	I1217 20:29:40.704735  522827 command_runner.go:130] >       "uid":  {
	I1217 20:29:40.704739  522827 command_runner.go:130] >         "value":  "0"
	I1217 20:29:40.704742  522827 command_runner.go:130] >       },
	I1217 20:29:40.704746  522827 command_runner.go:130] >       "username":  "",
	I1217 20:29:40.704753  522827 command_runner.go:130] >       "pinned":  false
	I1217 20:29:40.704756  522827 command_runner.go:130] >     },
	I1217 20:29:40.704759  522827 command_runner.go:130] >     {
	I1217 20:29:40.704772  522827 command_runner.go:130] >       "id":  "a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a",
	I1217 20:29:40.704779  522827 command_runner.go:130] >       "repoTags":  [
	I1217 20:29:40.704785  522827 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1"
	I1217 20:29:40.704788  522827 command_runner.go:130] >       ],
	I1217 20:29:40.704793  522827 command_runner.go:130] >       "repoDigests":  [
	I1217 20:29:40.704803  522827 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:42360249c0c729ed0542bc8e4a6cd9ba4df358a4e5a140f6c24d5f966ee5121f",
	I1217 20:29:40.704813  522827 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:57ab0f75f58d99f4be7bff7bdda015fcbf1b7c20e58ba2722c8c39f751dc8c98"
	I1217 20:29:40.704822  522827 command_runner.go:130] >       ],
	I1217 20:29:40.704827  522827 command_runner.go:130] >       "size":  "72170325",
	I1217 20:29:40.704831  522827 command_runner.go:130] >       "uid":  {
	I1217 20:29:40.704835  522827 command_runner.go:130] >         "value":  "0"
	I1217 20:29:40.704838  522827 command_runner.go:130] >       },
	I1217 20:29:40.704842  522827 command_runner.go:130] >       "username":  "",
	I1217 20:29:40.704846  522827 command_runner.go:130] >       "pinned":  false
	I1217 20:29:40.704848  522827 command_runner.go:130] >     },
	I1217 20:29:40.704851  522827 command_runner.go:130] >     {
	I1217 20:29:40.704858  522827 command_runner.go:130] >       "id":  "7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e",
	I1217 20:29:40.704861  522827 command_runner.go:130] >       "repoTags":  [
	I1217 20:29:40.704866  522827 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-rc.1"
	I1217 20:29:40.704870  522827 command_runner.go:130] >       ],
	I1217 20:29:40.704875  522827 command_runner.go:130] >       "repoDigests":  [
	I1217 20:29:40.704883  522827 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:709cbcd809826ad98b553d8e283a04db70fa653526d1c2a5e1b50000701b2b6f",
	I1217 20:29:40.704894  522827 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bdd1fa8b53558a2e1967379a36b085c93faf15581e5fa9f212baf679d89c5bb5"
	I1217 20:29:40.704898  522827 command_runner.go:130] >       ],
	I1217 20:29:40.704903  522827 command_runner.go:130] >       "size":  "74107287",
	I1217 20:29:40.704910  522827 command_runner.go:130] >       "username":  "",
	I1217 20:29:40.704914  522827 command_runner.go:130] >       "pinned":  false
	I1217 20:29:40.704926  522827 command_runner.go:130] >     },
	I1217 20:29:40.704930  522827 command_runner.go:130] >     {
	I1217 20:29:40.704936  522827 command_runner.go:130] >       "id":  "abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde",
	I1217 20:29:40.704940  522827 command_runner.go:130] >       "repoTags":  [
	I1217 20:29:40.704946  522827 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-rc.1"
	I1217 20:29:40.704949  522827 command_runner.go:130] >       ],
	I1217 20:29:40.704963  522827 command_runner.go:130] >       "repoDigests":  [
	I1217 20:29:40.704975  522827 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8155e3db27c7081abfc8eb5da70820cfeaf0bba7449e45360e8220e670f417d3",
	I1217 20:29:40.704993  522827 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:9ac9664e74153a60bf2c27af77561abc33d85a716a48893c7e50ad356adc4ea0"
	I1217 20:29:40.705000  522827 command_runner.go:130] >       ],
	I1217 20:29:40.705005  522827 command_runner.go:130] >       "size":  "49822549",
	I1217 20:29:40.705008  522827 command_runner.go:130] >       "uid":  {
	I1217 20:29:40.705014  522827 command_runner.go:130] >         "value":  "0"
	I1217 20:29:40.705017  522827 command_runner.go:130] >       },
	I1217 20:29:40.705025  522827 command_runner.go:130] >       "username":  "",
	I1217 20:29:40.705029  522827 command_runner.go:130] >       "pinned":  false
	I1217 20:29:40.705033  522827 command_runner.go:130] >     },
	I1217 20:29:40.705036  522827 command_runner.go:130] >     {
	I1217 20:29:40.705043  522827 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1217 20:29:40.705055  522827 command_runner.go:130] >       "repoTags":  [
	I1217 20:29:40.705060  522827 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1217 20:29:40.705063  522827 command_runner.go:130] >       ],
	I1217 20:29:40.705068  522827 command_runner.go:130] >       "repoDigests":  [
	I1217 20:29:40.705078  522827 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1217 20:29:40.705089  522827 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1217 20:29:40.705094  522827 command_runner.go:130] >       ],
	I1217 20:29:40.705097  522827 command_runner.go:130] >       "size":  "519884",
	I1217 20:29:40.705101  522827 command_runner.go:130] >       "uid":  {
	I1217 20:29:40.705108  522827 command_runner.go:130] >         "value":  "65535"
	I1217 20:29:40.705111  522827 command_runner.go:130] >       },
	I1217 20:29:40.705115  522827 command_runner.go:130] >       "username":  "",
	I1217 20:29:40.705119  522827 command_runner.go:130] >       "pinned":  true
	I1217 20:29:40.705128  522827 command_runner.go:130] >     }
	I1217 20:29:40.705133  522827 command_runner.go:130] >   ]
	I1217 20:29:40.705136  522827 command_runner.go:130] > }
	I1217 20:29:40.705310  522827 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 20:29:40.705323  522827 crio.go:433] Images already preloaded, skipping extraction
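The JSON dump above is what the preload check parses to decide that extraction can be skipped. To eyeball the same image list, assuming jq is available wherever the output lands:

    sudo crictl images --output json | jq -r '.images[].repoTags[]'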
	I1217 20:29:40.705384  522827 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 20:29:40.728606  522827 command_runner.go:130] > {
	I1217 20:29:40.728624  522827 command_runner.go:130] >   "images":  [
	I1217 20:29:40.728629  522827 command_runner.go:130] >     {
	I1217 20:29:40.728638  522827 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1217 20:29:40.728643  522827 command_runner.go:130] >       "repoTags":  [
	I1217 20:29:40.728657  522827 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1217 20:29:40.728665  522827 command_runner.go:130] >       ],
	I1217 20:29:40.728669  522827 command_runner.go:130] >       "repoDigests":  [
	I1217 20:29:40.728678  522827 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1217 20:29:40.728686  522827 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1217 20:29:40.728689  522827 command_runner.go:130] >       ],
	I1217 20:29:40.728694  522827 command_runner.go:130] >       "size":  "111333938",
	I1217 20:29:40.728698  522827 command_runner.go:130] >       "username":  "",
	I1217 20:29:40.728705  522827 command_runner.go:130] >       "pinned":  false
	I1217 20:29:40.728708  522827 command_runner.go:130] >     },
	I1217 20:29:40.728711  522827 command_runner.go:130] >     {
	I1217 20:29:40.728718  522827 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1217 20:29:40.728726  522827 command_runner.go:130] >       "repoTags":  [
	I1217 20:29:40.728731  522827 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1217 20:29:40.728735  522827 command_runner.go:130] >       ],
	I1217 20:29:40.728739  522827 command_runner.go:130] >       "repoDigests":  [
	I1217 20:29:40.728747  522827 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1217 20:29:40.728756  522827 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1217 20:29:40.728759  522827 command_runner.go:130] >       ],
	I1217 20:29:40.728763  522827 command_runner.go:130] >       "size":  "29037500",
	I1217 20:29:40.728767  522827 command_runner.go:130] >       "username":  "",
	I1217 20:29:40.728774  522827 command_runner.go:130] >       "pinned":  false
	I1217 20:29:40.728778  522827 command_runner.go:130] >     },
	I1217 20:29:40.728781  522827 command_runner.go:130] >     {
	I1217 20:29:40.728789  522827 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1217 20:29:40.728793  522827 command_runner.go:130] >       "repoTags":  [
	I1217 20:29:40.728798  522827 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1217 20:29:40.728801  522827 command_runner.go:130] >       ],
	I1217 20:29:40.728805  522827 command_runner.go:130] >       "repoDigests":  [
	I1217 20:29:40.728813  522827 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1217 20:29:40.728821  522827 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1217 20:29:40.728824  522827 command_runner.go:130] >       ],
	I1217 20:29:40.728829  522827 command_runner.go:130] >       "size":  "74491780",
	I1217 20:29:40.728833  522827 command_runner.go:130] >       "username":  "nonroot",
	I1217 20:29:40.728840  522827 command_runner.go:130] >       "pinned":  false
	I1217 20:29:40.728843  522827 command_runner.go:130] >     },
	I1217 20:29:40.728846  522827 command_runner.go:130] >     {
	I1217 20:29:40.728853  522827 command_runner.go:130] >       "id":  "271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57",
	I1217 20:29:40.728857  522827 command_runner.go:130] >       "repoTags":  [
	I1217 20:29:40.728862  522827 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.6-0"
	I1217 20:29:40.728866  522827 command_runner.go:130] >       ],
	I1217 20:29:40.728870  522827 command_runner.go:130] >       "repoDigests":  [
	I1217 20:29:40.728877  522827 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890",
	I1217 20:29:40.728887  522827 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:aa0d8bc8f6a6c3655b8efe0a10c5bf052f5574ebe13f904c5b0c9002ce4b2561"
	I1217 20:29:40.728890  522827 command_runner.go:130] >       ],
	I1217 20:29:40.728894  522827 command_runner.go:130] >       "size":  "60850387",
	I1217 20:29:40.728898  522827 command_runner.go:130] >       "uid":  {
	I1217 20:29:40.728902  522827 command_runner.go:130] >         "value":  "0"
	I1217 20:29:40.728904  522827 command_runner.go:130] >       },
	I1217 20:29:40.728913  522827 command_runner.go:130] >       "username":  "",
	I1217 20:29:40.728917  522827 command_runner.go:130] >       "pinned":  false
	I1217 20:29:40.728920  522827 command_runner.go:130] >     },
	I1217 20:29:40.728924  522827 command_runner.go:130] >     {
	I1217 20:29:40.728930  522827 command_runner.go:130] >       "id":  "3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54",
	I1217 20:29:40.728934  522827 command_runner.go:130] >       "repoTags":  [
	I1217 20:29:40.728939  522827 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-rc.1"
	I1217 20:29:40.728943  522827 command_runner.go:130] >       ],
	I1217 20:29:40.728946  522827 command_runner.go:130] >       "repoDigests":  [
	I1217 20:29:40.728954  522827 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:58367b5c0428495c0c12411fa7a018f5d40fe57307b85d8935b1ed35706ff7ee",
	I1217 20:29:40.728962  522827 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:e6ee3594f9ff061c53d6721bc04b810ec4227e28da3bd98e59206d552d45cde8"
	I1217 20:29:40.728965  522827 command_runner.go:130] >       ],
	I1217 20:29:40.728969  522827 command_runner.go:130] >       "size":  "85015535",
	I1217 20:29:40.728972  522827 command_runner.go:130] >       "uid":  {
	I1217 20:29:40.728976  522827 command_runner.go:130] >         "value":  "0"
	I1217 20:29:40.728979  522827 command_runner.go:130] >       },
	I1217 20:29:40.728983  522827 command_runner.go:130] >       "username":  "",
	I1217 20:29:40.728986  522827 command_runner.go:130] >       "pinned":  false
	I1217 20:29:40.728996  522827 command_runner.go:130] >     },
	I1217 20:29:40.728999  522827 command_runner.go:130] >     {
	I1217 20:29:40.729006  522827 command_runner.go:130] >       "id":  "a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a",
	I1217 20:29:40.729009  522827 command_runner.go:130] >       "repoTags":  [
	I1217 20:29:40.729015  522827 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1"
	I1217 20:29:40.729018  522827 command_runner.go:130] >       ],
	I1217 20:29:40.729022  522827 command_runner.go:130] >       "repoDigests":  [
	I1217 20:29:40.729031  522827 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:42360249c0c729ed0542bc8e4a6cd9ba4df358a4e5a140f6c24d5f966ee5121f",
	I1217 20:29:40.729039  522827 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:57ab0f75f58d99f4be7bff7bdda015fcbf1b7c20e58ba2722c8c39f751dc8c98"
	I1217 20:29:40.729042  522827 command_runner.go:130] >       ],
	I1217 20:29:40.729046  522827 command_runner.go:130] >       "size":  "72170325",
	I1217 20:29:40.729049  522827 command_runner.go:130] >       "uid":  {
	I1217 20:29:40.729053  522827 command_runner.go:130] >         "value":  "0"
	I1217 20:29:40.729056  522827 command_runner.go:130] >       },
	I1217 20:29:40.729060  522827 command_runner.go:130] >       "username":  "",
	I1217 20:29:40.729064  522827 command_runner.go:130] >       "pinned":  false
	I1217 20:29:40.729067  522827 command_runner.go:130] >     },
	I1217 20:29:40.729070  522827 command_runner.go:130] >     {
	I1217 20:29:40.729076  522827 command_runner.go:130] >       "id":  "7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e",
	I1217 20:29:40.729081  522827 command_runner.go:130] >       "repoTags":  [
	I1217 20:29:40.729086  522827 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-rc.1"
	I1217 20:29:40.729089  522827 command_runner.go:130] >       ],
	I1217 20:29:40.729093  522827 command_runner.go:130] >       "repoDigests":  [
	I1217 20:29:40.729100  522827 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:709cbcd809826ad98b553d8e283a04db70fa653526d1c2a5e1b50000701b2b6f",
	I1217 20:29:40.729108  522827 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bdd1fa8b53558a2e1967379a36b085c93faf15581e5fa9f212baf679d89c5bb5"
	I1217 20:29:40.729111  522827 command_runner.go:130] >       ],
	I1217 20:29:40.729115  522827 command_runner.go:130] >       "size":  "74107287",
	I1217 20:29:40.729119  522827 command_runner.go:130] >       "username":  "",
	I1217 20:29:40.729123  522827 command_runner.go:130] >       "pinned":  false
	I1217 20:29:40.729125  522827 command_runner.go:130] >     },
	I1217 20:29:40.729128  522827 command_runner.go:130] >     {
	I1217 20:29:40.729135  522827 command_runner.go:130] >       "id":  "abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde",
	I1217 20:29:40.729138  522827 command_runner.go:130] >       "repoTags":  [
	I1217 20:29:40.729147  522827 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-rc.1"
	I1217 20:29:40.729150  522827 command_runner.go:130] >       ],
	I1217 20:29:40.729154  522827 command_runner.go:130] >       "repoDigests":  [
	I1217 20:29:40.729163  522827 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8155e3db27c7081abfc8eb5da70820cfeaf0bba7449e45360e8220e670f417d3",
	I1217 20:29:40.729180  522827 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:9ac9664e74153a60bf2c27af77561abc33d85a716a48893c7e50ad356adc4ea0"
	I1217 20:29:40.729183  522827 command_runner.go:130] >       ],
	I1217 20:29:40.729187  522827 command_runner.go:130] >       "size":  "49822549",
	I1217 20:29:40.729191  522827 command_runner.go:130] >       "uid":  {
	I1217 20:29:40.729195  522827 command_runner.go:130] >         "value":  "0"
	I1217 20:29:40.729198  522827 command_runner.go:130] >       },
	I1217 20:29:40.729202  522827 command_runner.go:130] >       "username":  "",
	I1217 20:29:40.729205  522827 command_runner.go:130] >       "pinned":  false
	I1217 20:29:40.729208  522827 command_runner.go:130] >     },
	I1217 20:29:40.729212  522827 command_runner.go:130] >     {
	I1217 20:29:40.729218  522827 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1217 20:29:40.729221  522827 command_runner.go:130] >       "repoTags":  [
	I1217 20:29:40.729225  522827 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1217 20:29:40.729228  522827 command_runner.go:130] >       ],
	I1217 20:29:40.729232  522827 command_runner.go:130] >       "repoDigests":  [
	I1217 20:29:40.729239  522827 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1217 20:29:40.729246  522827 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1217 20:29:40.729249  522827 command_runner.go:130] >       ],
	I1217 20:29:40.729253  522827 command_runner.go:130] >       "size":  "519884",
	I1217 20:29:40.729256  522827 command_runner.go:130] >       "uid":  {
	I1217 20:29:40.729260  522827 command_runner.go:130] >         "value":  "65535"
	I1217 20:29:40.729263  522827 command_runner.go:130] >       },
	I1217 20:29:40.729267  522827 command_runner.go:130] >       "username":  "",
	I1217 20:29:40.729271  522827 command_runner.go:130] >       "pinned":  true
	I1217 20:29:40.729274  522827 command_runner.go:130] >     }
	I1217 20:29:40.729276  522827 command_runner.go:130] >   ]
	I1217 20:29:40.729279  522827 command_runner.go:130] > }
	I1217 20:29:40.730532  522827 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 20:29:40.730563  522827 cache_images.go:86] Images are preloaded, skipping loading
	I1217 20:29:40.730572  522827 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 crio true true} ...
	I1217 20:29:40.730679  522827 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-655452 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-655452 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
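The kubeadm.go:947 block above is the rendered systemd drop-in for the kubelet on this node. A minimal Go sketch of rendering that ExecStart line from the per-node values shown in the log (runtime, version, node name, node IP), using a simple text/template as an assumption rather than minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// A sketch of the drop-in; the flag set is copied from the log above.
const unit = `[Unit]
Wants={{.Runtime}}.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	// Values below are the ones visible in this run's log.
	_ = t.Execute(os.Stdout, map[string]string{
		"Runtime":  "crio",
		"Version":  "v1.35.0-rc.1",
		"NodeName": "functional-655452",
		"NodeIP":   "192.168.49.2",
	})
}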
	I1217 20:29:40.730767  522827 ssh_runner.go:195] Run: crio config
	I1217 20:29:40.759067  522827 command_runner.go:130] ! time="2025-12-17T20:29:40.758680307Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1217 20:29:40.759091  522827 command_runner.go:130] ! time="2025-12-17T20:29:40.758877363Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1217 20:29:40.759355  522827 command_runner.go:130] ! time="2025-12-17T20:29:40.759160664Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1217 20:29:40.759513  522827 command_runner.go:130] ! time="2025-12-17T20:29:40.75929148Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1217 20:29:40.759764  522827 command_runner.go:130] ! time="2025-12-17T20:29:40.759610703Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:29:40.760178  522827 command_runner.go:130] ! time="2025-12-17T20:29:40.759978034Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1217 20:29:40.781892  522827 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
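The config-loading lines above show the merge order `crio config` applies before dumping the effective configuration: the single base file first (skipped here because /etc/crio/crio.conf does not exist), then every drop-in under /etc/crio/crio.conf.d in lexical order, so 10-crio.conf overrides 02-crio.conf. A minimal sketch of that ordering (CRI-O itself parses and merges TOML; this only prints which files would apply, in order):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	base := "/etc/crio/crio.conf"
	if _, err := os.Stat(base); err == nil {
		fmt.Println("apply base:", base)
	} else {
		fmt.Println("skip missing base:", base)
	}
	dir := "/etc/crio/crio.conf.d"
	entries, err := os.ReadDir(dir) // os.ReadDir returns entries sorted by name
	if err != nil {
		fmt.Println("no drop-in dir:", dir)
		return
	}
	for _, e := range entries {
		if !e.IsDir() {
			// Later (lexically greater) files override earlier ones,
			// e.g. 10-crio.conf overrides 02-crio.conf.
			fmt.Println("apply drop-in:", filepath.Join(dir, e.Name()))
		}
	}
}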
	I1217 20:29:40.789853  522827 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1217 20:29:40.789886  522827 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1217 20:29:40.789894  522827 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1217 20:29:40.789897  522827 command_runner.go:130] > #
	I1217 20:29:40.789905  522827 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1217 20:29:40.789911  522827 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1217 20:29:40.789918  522827 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1217 20:29:40.789927  522827 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1217 20:29:40.789931  522827 command_runner.go:130] > # reload'.
	I1217 20:29:40.789938  522827 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1217 20:29:40.789949  522827 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1217 20:29:40.789959  522827 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1217 20:29:40.789965  522827 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1217 20:29:40.789972  522827 command_runner.go:130] > [crio]
	I1217 20:29:40.789978  522827 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1217 20:29:40.789983  522827 command_runner.go:130] > # containers images, in this directory.
	I1217 20:29:40.789993  522827 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1217 20:29:40.790003  522827 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1217 20:29:40.790008  522827 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1217 20:29:40.790017  522827 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1217 20:29:40.790024  522827 command_runner.go:130] > # imagestore = ""
	I1217 20:29:40.790038  522827 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1217 20:29:40.790048  522827 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1217 20:29:40.790053  522827 command_runner.go:130] > # storage_driver = "overlay"
	I1217 20:29:40.790058  522827 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1217 20:29:40.790065  522827 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1217 20:29:40.790069  522827 command_runner.go:130] > # storage_option = [
	I1217 20:29:40.790073  522827 command_runner.go:130] > # ]
	I1217 20:29:40.790079  522827 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1217 20:29:40.790092  522827 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1217 20:29:40.790100  522827 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1217 20:29:40.790106  522827 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1217 20:29:40.790112  522827 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1217 20:29:40.790119  522827 command_runner.go:130] > # always happen on a node reboot
	I1217 20:29:40.790124  522827 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1217 20:29:40.790139  522827 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1217 20:29:40.790152  522827 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1217 20:29:40.790158  522827 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1217 20:29:40.790162  522827 command_runner.go:130] > # version_file_persist = ""
	I1217 20:29:40.790170  522827 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1217 20:29:40.790180  522827 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1217 20:29:40.790184  522827 command_runner.go:130] > # internal_wipe = true
	I1217 20:29:40.790193  522827 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1217 20:29:40.790202  522827 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1217 20:29:40.790206  522827 command_runner.go:130] > # internal_repair = true
	I1217 20:29:40.790211  522827 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1217 20:29:40.790219  522827 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1217 20:29:40.790226  522827 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1217 20:29:40.790232  522827 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1217 20:29:40.790241  522827 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1217 20:29:40.790251  522827 command_runner.go:130] > [crio.api]
	I1217 20:29:40.790257  522827 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1217 20:29:40.790262  522827 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1217 20:29:40.790271  522827 command_runner.go:130] > # IP address on which the stream server will listen.
	I1217 20:29:40.790278  522827 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1217 20:29:40.790285  522827 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1217 20:29:40.790290  522827 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1217 20:29:40.790297  522827 command_runner.go:130] > # stream_port = "0"
	I1217 20:29:40.790302  522827 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1217 20:29:40.790307  522827 command_runner.go:130] > # stream_enable_tls = false
	I1217 20:29:40.790313  522827 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1217 20:29:40.790320  522827 command_runner.go:130] > # stream_idle_timeout = ""
	I1217 20:29:40.790330  522827 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1217 20:29:40.790339  522827 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1217 20:29:40.790343  522827 command_runner.go:130] > # stream_tls_cert = ""
	I1217 20:29:40.790349  522827 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1217 20:29:40.790357  522827 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1217 20:29:40.790361  522827 command_runner.go:130] > # stream_tls_key = ""
	I1217 20:29:40.790367  522827 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1217 20:29:40.790377  522827 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1217 20:29:40.790382  522827 command_runner.go:130] > # automatically pick up the changes.
	I1217 20:29:40.790385  522827 command_runner.go:130] > # stream_tls_ca = ""
	I1217 20:29:40.790402  522827 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1217 20:29:40.790415  522827 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1217 20:29:40.790423  522827 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1217 20:29:40.790428  522827 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1217 20:29:40.790437  522827 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1217 20:29:40.790443  522827 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1217 20:29:40.790447  522827 command_runner.go:130] > [crio.runtime]
	I1217 20:29:40.790455  522827 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1217 20:29:40.790465  522827 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1217 20:29:40.790470  522827 command_runner.go:130] > # "nofile=1024:2048"
	I1217 20:29:40.790476  522827 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1217 20:29:40.790480  522827 command_runner.go:130] > # default_ulimits = [
	I1217 20:29:40.790486  522827 command_runner.go:130] > # ]
	I1217 20:29:40.790493  522827 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1217 20:29:40.790499  522827 command_runner.go:130] > # no_pivot = false
	I1217 20:29:40.790505  522827 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1217 20:29:40.790511  522827 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1217 20:29:40.790518  522827 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1217 20:29:40.790525  522827 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1217 20:29:40.790530  522827 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1217 20:29:40.790539  522827 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1217 20:29:40.790543  522827 command_runner.go:130] > # conmon = ""
	I1217 20:29:40.790547  522827 command_runner.go:130] > # Cgroup setting for conmon
	I1217 20:29:40.790558  522827 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1217 20:29:40.790563  522827 command_runner.go:130] > conmon_cgroup = "pod"
	I1217 20:29:40.790572  522827 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1217 20:29:40.790585  522827 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1217 20:29:40.790592  522827 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1217 20:29:40.790603  522827 command_runner.go:130] > # conmon_env = [
	I1217 20:29:40.790606  522827 command_runner.go:130] > # ]
	I1217 20:29:40.790611  522827 command_runner.go:130] > # Additional environment variables to set for all the
	I1217 20:29:40.790621  522827 command_runner.go:130] > # containers. These are overridden if set in the
	I1217 20:29:40.790627  522827 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1217 20:29:40.790631  522827 command_runner.go:130] > # default_env = [
	I1217 20:29:40.790634  522827 command_runner.go:130] > # ]
	I1217 20:29:40.790639  522827 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1217 20:29:40.790647  522827 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1217 20:29:40.790653  522827 command_runner.go:130] > # selinux = false
	I1217 20:29:40.790660  522827 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1217 20:29:40.790675  522827 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1217 20:29:40.790682  522827 command_runner.go:130] > # This option supports live configuration reload.
	I1217 20:29:40.790691  522827 command_runner.go:130] > # seccomp_profile = ""
	I1217 20:29:40.790698  522827 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1217 20:29:40.790703  522827 command_runner.go:130] > # This option supports live configuration reload.
	I1217 20:29:40.790707  522827 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1217 20:29:40.790717  522827 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1217 20:29:40.790723  522827 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1217 20:29:40.790730  522827 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1217 20:29:40.790738  522827 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1217 20:29:40.790744  522827 command_runner.go:130] > # This option supports live configuration reload.
	I1217 20:29:40.790751  522827 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1217 20:29:40.790757  522827 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1217 20:29:40.790761  522827 command_runner.go:130] > # the cgroup blockio controller.
	I1217 20:29:40.790765  522827 command_runner.go:130] > # blockio_config_file = ""
	I1217 20:29:40.790774  522827 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1217 20:29:40.790780  522827 command_runner.go:130] > # blockio parameters.
	I1217 20:29:40.790790  522827 command_runner.go:130] > # blockio_reload = false
	I1217 20:29:40.790796  522827 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1217 20:29:40.790800  522827 command_runner.go:130] > # irqbalance daemon.
	I1217 20:29:40.790805  522827 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1217 20:29:40.790814  522827 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1217 20:29:40.790828  522827 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1217 20:29:40.790836  522827 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1217 20:29:40.790845  522827 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1217 20:29:40.790852  522827 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1217 20:29:40.790859  522827 command_runner.go:130] > # This option supports live configuration reload.
	I1217 20:29:40.790863  522827 command_runner.go:130] > # rdt_config_file = ""
	I1217 20:29:40.790869  522827 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1217 20:29:40.790873  522827 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1217 20:29:40.790881  522827 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1217 20:29:40.790885  522827 command_runner.go:130] > # separate_pull_cgroup = ""
	I1217 20:29:40.790892  522827 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1217 20:29:40.790900  522827 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1217 20:29:40.790904  522827 command_runner.go:130] > # will be added.
	I1217 20:29:40.790908  522827 command_runner.go:130] > # default_capabilities = [
	I1217 20:29:40.790920  522827 command_runner.go:130] > # 	"CHOWN",
	I1217 20:29:40.790924  522827 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1217 20:29:40.790927  522827 command_runner.go:130] > # 	"FSETID",
	I1217 20:29:40.790930  522827 command_runner.go:130] > # 	"FOWNER",
	I1217 20:29:40.790940  522827 command_runner.go:130] > # 	"SETGID",
	I1217 20:29:40.790944  522827 command_runner.go:130] > # 	"SETUID",
	I1217 20:29:40.790963  522827 command_runner.go:130] > # 	"SETPCAP",
	I1217 20:29:40.790971  522827 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1217 20:29:40.790975  522827 command_runner.go:130] > # 	"KILL",
	I1217 20:29:40.790977  522827 command_runner.go:130] > # ]
	I1217 20:29:40.790985  522827 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1217 20:29:40.790992  522827 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1217 20:29:40.790999  522827 command_runner.go:130] > # add_inheritable_capabilities = false
	I1217 20:29:40.791005  522827 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1217 20:29:40.791018  522827 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1217 20:29:40.791023  522827 command_runner.go:130] > default_sysctls = [
	I1217 20:29:40.791030  522827 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1217 20:29:40.791033  522827 command_runner.go:130] > ]
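The default_sysctls entry above sets net.ipv4.ip_unprivileged_port_start=0 inside each container, which lets non-root container processes bind ports below 1024. A minimal sketch that reads the effective value from /proc (run it inside a container to verify; no assumptions beyond the standard proc path):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	b, err := os.ReadFile("/proc/sys/net/ipv4/ip_unprivileged_port_start")
	if err != nil {
		fmt.Println("sysctl not readable:", err)
		return
	}
	// 0 means unprivileged processes may bind any port, including <1024.
	fmt.Println("ip_unprivileged_port_start =", strings.TrimSpace(string(b)))
}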
	I1217 20:29:40.791038  522827 command_runner.go:130] > # List of devices on the host that a
	I1217 20:29:40.791044  522827 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1217 20:29:40.791048  522827 command_runner.go:130] > # allowed_devices = [
	I1217 20:29:40.791055  522827 command_runner.go:130] > # 	"/dev/fuse",
	I1217 20:29:40.791059  522827 command_runner.go:130] > # 	"/dev/net/tun",
	I1217 20:29:40.791062  522827 command_runner.go:130] > # ]
	I1217 20:29:40.791067  522827 command_runner.go:130] > # List of additional devices. specified as
	I1217 20:29:40.791081  522827 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1217 20:29:40.791088  522827 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1217 20:29:40.791096  522827 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1217 20:29:40.791103  522827 command_runner.go:130] > # additional_devices = [
	I1217 20:29:40.791110  522827 command_runner.go:130] > # ]
	I1217 20:29:40.791115  522827 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1217 20:29:40.791119  522827 command_runner.go:130] > # cdi_spec_dirs = [
	I1217 20:29:40.791122  522827 command_runner.go:130] > # 	"/etc/cdi",
	I1217 20:29:40.791126  522827 command_runner.go:130] > # 	"/var/run/cdi",
	I1217 20:29:40.791130  522827 command_runner.go:130] > # ]
	I1217 20:29:40.791136  522827 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1217 20:29:40.791144  522827 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1217 20:29:40.791149  522827 command_runner.go:130] > # Defaults to false.
	I1217 20:29:40.791156  522827 command_runner.go:130] > # device_ownership_from_security_context = false
	I1217 20:29:40.791164  522827 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1217 20:29:40.791178  522827 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1217 20:29:40.791181  522827 command_runner.go:130] > # hooks_dir = [
	I1217 20:29:40.791186  522827 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1217 20:29:40.791189  522827 command_runner.go:130] > # ]
	I1217 20:29:40.791195  522827 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1217 20:29:40.791205  522827 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1217 20:29:40.791210  522827 command_runner.go:130] > # its default mounts from the following two files:
	I1217 20:29:40.791220  522827 command_runner.go:130] > #
	I1217 20:29:40.791229  522827 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1217 20:29:40.791240  522827 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1217 20:29:40.791248  522827 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1217 20:29:40.791251  522827 command_runner.go:130] > #
	I1217 20:29:40.791257  522827 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1217 20:29:40.791274  522827 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1217 20:29:40.791280  522827 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1217 20:29:40.791285  522827 command_runner.go:130] > #      only add mounts it finds in this file.
	I1217 20:29:40.791288  522827 command_runner.go:130] > #
	I1217 20:29:40.791292  522827 command_runner.go:130] > # default_mounts_file = ""
	I1217 20:29:40.791301  522827 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1217 20:29:40.791316  522827 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1217 20:29:40.791320  522827 command_runner.go:130] > # pids_limit = -1
	I1217 20:29:40.791326  522827 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1217 20:29:40.791335  522827 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1217 20:29:40.791343  522827 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1217 20:29:40.791354  522827 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1217 20:29:40.791357  522827 command_runner.go:130] > # log_size_max = -1
	I1217 20:29:40.791364  522827 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1217 20:29:40.791368  522827 command_runner.go:130] > # log_to_journald = false
	I1217 20:29:40.791374  522827 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1217 20:29:40.791383  522827 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1217 20:29:40.791391  522827 command_runner.go:130] > # Path to directory for container attach sockets.
	I1217 20:29:40.791396  522827 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1217 20:29:40.791401  522827 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1217 20:29:40.791405  522827 command_runner.go:130] > # bind_mount_prefix = ""
	I1217 20:29:40.791417  522827 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1217 20:29:40.791421  522827 command_runner.go:130] > # read_only = false
	I1217 20:29:40.791427  522827 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1217 20:29:40.791437  522827 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1217 20:29:40.791441  522827 command_runner.go:130] > # live configuration reload.
	I1217 20:29:40.791445  522827 command_runner.go:130] > # log_level = "info"
	I1217 20:29:40.791454  522827 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1217 20:29:40.791460  522827 command_runner.go:130] > # This option supports live configuration reload.
	I1217 20:29:40.791466  522827 command_runner.go:130] > # log_filter = ""
	I1217 20:29:40.791472  522827 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1217 20:29:40.791481  522827 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1217 20:29:40.791485  522827 command_runner.go:130] > # separated by comma.
	I1217 20:29:40.791493  522827 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1217 20:29:40.791497  522827 command_runner.go:130] > # uid_mappings = ""
	I1217 20:29:40.791506  522827 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1217 20:29:40.791518  522827 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1217 20:29:40.791523  522827 command_runner.go:130] > # separated by comma.
	I1217 20:29:40.791530  522827 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1217 20:29:40.791535  522827 command_runner.go:130] > # gid_mappings = ""
	I1217 20:29:40.791540  522827 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1217 20:29:40.791549  522827 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1217 20:29:40.791556  522827 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1217 20:29:40.791565  522827 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1217 20:29:40.791572  522827 command_runner.go:130] > # minimum_mappable_uid = -1
	I1217 20:29:40.791604  522827 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1217 20:29:40.791611  522827 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1217 20:29:40.791617  522827 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1217 20:29:40.791627  522827 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1217 20:29:40.791634  522827 command_runner.go:130] > # minimum_mappable_gid = -1
	I1217 20:29:40.791640  522827 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1217 20:29:40.791648  522827 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1217 20:29:40.791662  522827 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1217 20:29:40.791666  522827 command_runner.go:130] > # ctr_stop_timeout = 30
	I1217 20:29:40.791672  522827 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1217 20:29:40.791680  522827 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1217 20:29:40.791685  522827 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1217 20:29:40.791690  522827 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1217 20:29:40.791694  522827 command_runner.go:130] > # drop_infra_ctr = true
	I1217 20:29:40.791700  522827 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1217 20:29:40.791712  522827 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1217 20:29:40.791723  522827 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1217 20:29:40.791727  522827 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1217 20:29:40.791734  522827 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1217 20:29:40.791743  522827 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1217 20:29:40.791749  522827 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1217 20:29:40.791756  522827 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1217 20:29:40.791760  522827 command_runner.go:130] > # shared_cpuset = ""
	I1217 20:29:40.791766  522827 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1217 20:29:40.791773  522827 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1217 20:29:40.791777  522827 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1217 20:29:40.791784  522827 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1217 20:29:40.791795  522827 command_runner.go:130] > # pinns_path = ""
	I1217 20:29:40.791801  522827 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1217 20:29:40.791807  522827 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1217 20:29:40.791814  522827 command_runner.go:130] > # enable_criu_support = true
	I1217 20:29:40.791819  522827 command_runner.go:130] > # Enable/disable the generation of the container,
	I1217 20:29:40.791826  522827 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1217 20:29:40.791833  522827 command_runner.go:130] > # enable_pod_events = false
	I1217 20:29:40.791839  522827 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1217 20:29:40.791845  522827 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1217 20:29:40.791849  522827 command_runner.go:130] > # default_runtime = "crun"
	I1217 20:29:40.791857  522827 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1217 20:29:40.791865  522827 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1217 20:29:40.791874  522827 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1217 20:29:40.791887  522827 command_runner.go:130] > # creation as a file is not desired either.
	I1217 20:29:40.791896  522827 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1217 20:29:40.791903  522827 command_runner.go:130] > # the hostname is being managed dynamically.
	I1217 20:29:40.791910  522827 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1217 20:29:40.791914  522827 command_runner.go:130] > # ]
	I1217 20:29:40.791920  522827 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1217 20:29:40.791929  522827 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1217 20:29:40.791935  522827 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1217 20:29:40.791943  522827 command_runner.go:130] > # Each entry in the table should follow the format:
	I1217 20:29:40.791946  522827 command_runner.go:130] > #
	I1217 20:29:40.791951  522827 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1217 20:29:40.791958  522827 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1217 20:29:40.791964  522827 command_runner.go:130] > # runtime_type = "oci"
	I1217 20:29:40.791969  522827 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1217 20:29:40.791976  522827 command_runner.go:130] > # inherit_default_runtime = false
	I1217 20:29:40.791981  522827 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1217 20:29:40.791986  522827 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1217 20:29:40.791990  522827 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1217 20:29:40.791996  522827 command_runner.go:130] > # monitor_env = []
	I1217 20:29:40.792001  522827 command_runner.go:130] > # privileged_without_host_devices = false
	I1217 20:29:40.792008  522827 command_runner.go:130] > # allowed_annotations = []
	I1217 20:29:40.792014  522827 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1217 20:29:40.792017  522827 command_runner.go:130] > # no_sync_log = false
	I1217 20:29:40.792021  522827 command_runner.go:130] > # default_annotations = {}
	I1217 20:29:40.792028  522827 command_runner.go:130] > # stream_websockets = false
	I1217 20:29:40.792034  522827 command_runner.go:130] > # seccomp_profile = ""
	I1217 20:29:40.792066  522827 command_runner.go:130] > # Where:
	I1217 20:29:40.792076  522827 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1217 20:29:40.792083  522827 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1217 20:29:40.792090  522827 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1217 20:29:40.792098  522827 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1217 20:29:40.792102  522827 command_runner.go:130] > #   in $PATH.
	I1217 20:29:40.792108  522827 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1217 20:29:40.792113  522827 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1217 20:29:40.792122  522827 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1217 20:29:40.792128  522827 command_runner.go:130] > #   state.
	I1217 20:29:40.792134  522827 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1217 20:29:40.792143  522827 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1217 20:29:40.792149  522827 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1217 20:29:40.792155  522827 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1217 20:29:40.792163  522827 command_runner.go:130] > #   the values from the default runtime on load time.
	I1217 20:29:40.792174  522827 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1217 20:29:40.792183  522827 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1217 20:29:40.792190  522827 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1217 20:29:40.792199  522827 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1217 20:29:40.792207  522827 command_runner.go:130] > #   The currently recognized values are:
	I1217 20:29:40.792214  522827 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1217 20:29:40.792222  522827 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1217 20:29:40.792231  522827 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1217 20:29:40.792237  522827 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1217 20:29:40.792251  522827 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1217 20:29:40.792260  522827 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1217 20:29:40.792270  522827 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1217 20:29:40.792277  522827 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1217 20:29:40.792284  522827 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1217 20:29:40.792293  522827 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1217 20:29:40.792309  522827 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1217 20:29:40.792316  522827 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1217 20:29:40.792322  522827 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1217 20:29:40.792331  522827 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1217 20:29:40.792337  522827 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1217 20:29:40.792345  522827 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1217 20:29:40.792353  522827 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1217 20:29:40.792358  522827 command_runner.go:130] > #   deprecated option "conmon".
	I1217 20:29:40.792367  522827 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1217 20:29:40.792380  522827 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1217 20:29:40.792387  522827 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1217 20:29:40.792392  522827 command_runner.go:130] > #   should be moved to the container's cgroup
	I1217 20:29:40.792405  522827 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1217 20:29:40.792410  522827 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1217 20:29:40.792420  522827 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1217 20:29:40.792424  522827 command_runner.go:130] > #   conmon-rs by using:
	I1217 20:29:40.792432  522827 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1217 20:29:40.792441  522827 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1217 20:29:40.792454  522827 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1217 20:29:40.792465  522827 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1217 20:29:40.792471  522827 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1217 20:29:40.792485  522827 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1217 20:29:40.792497  522827 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1217 20:29:40.792506  522827 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1217 20:29:40.792515  522827 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1217 20:29:40.792524  522827 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1217 20:29:40.792529  522827 command_runner.go:130] > #   when a machine crash happens.
	I1217 20:29:40.792536  522827 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1217 20:29:40.792546  522827 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1217 20:29:40.792558  522827 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1217 20:29:40.792562  522827 command_runner.go:130] > #   seccomp profile for the runtime.
	I1217 20:29:40.792568  522827 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1217 20:29:40.792579  522827 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1217 20:29:40.792582  522827 command_runner.go:130] > #
	I1217 20:29:40.792587  522827 command_runner.go:130] > # Using the seccomp notifier feature:
	I1217 20:29:40.792590  522827 command_runner.go:130] > #
	I1217 20:29:40.792596  522827 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1217 20:29:40.792605  522827 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1217 20:29:40.792608  522827 command_runner.go:130] > #
	I1217 20:29:40.792615  522827 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1217 20:29:40.792630  522827 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1217 20:29:40.792633  522827 command_runner.go:130] > #
	I1217 20:29:40.792642  522827 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1217 20:29:40.792649  522827 command_runner.go:130] > # feature.
	I1217 20:29:40.792652  522827 command_runner.go:130] > #
	I1217 20:29:40.792658  522827 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I1217 20:29:40.792667  522827 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1217 20:29:40.792673  522827 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1217 20:29:40.792679  522827 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1217 20:29:40.792688  522827 command_runner.go:130] > # seconds if the annotation is set to "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1217 20:29:40.792692  522827 command_runner.go:130] > #
	I1217 20:29:40.792702  522827 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1217 20:29:40.792711  522827 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1217 20:29:40.792715  522827 command_runner.go:130] > #
	I1217 20:29:40.792721  522827 command_runner.go:130] > # This also means that the Pod's "restartPolicy" has to be set to "Never",
	I1217 20:29:40.792727  522827 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1217 20:29:40.792732  522827 command_runner.go:130] > #
	I1217 20:29:40.792738  522827 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1217 20:29:40.792744  522827 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1217 20:29:40.792750  522827 command_runner.go:130] > # limitation.
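	Putting the comments above together, a minimal sketch of a runtime table that opts into the notifier; this is illustrative and not from this run's config, the handler name "runc-debug" is hypothetical, and the paths mirror the runc table below:
	
	  [crio.runtime.runtimes.runc-debug]
	  runtime_path = "/usr/libexec/crio/runc"
	  runtime_root = "/run/runc"
	  monitor_path = "/usr/libexec/crio/conmon"
	  # environment passed to the monitor; replaces the deprecated "conmon_env"
	  monitor_env = [
	  	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	  ]
	  # the documented global default, shown explicitly for illustration
	  container_min_memory = "12 MiB"
	  allowed_annotations = [
	  	"io.kubernetes.cri-o.seccompNotifierAction",
	  ]
	
	A pod then opts in by setting "io.kubernetes.cri-o.seccompNotifierAction=stop" on its sandbox and "restartPolicy: Never", as the comments above require.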
	I1217 20:29:40.792754  522827 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1217 20:29:40.792758  522827 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1217 20:29:40.792761  522827 command_runner.go:130] > runtime_type = ""
	I1217 20:29:40.792765  522827 command_runner.go:130] > runtime_root = "/run/crun"
	I1217 20:29:40.792769  522827 command_runner.go:130] > inherit_default_runtime = false
	I1217 20:29:40.792774  522827 command_runner.go:130] > runtime_config_path = ""
	I1217 20:29:40.792781  522827 command_runner.go:130] > container_min_memory = ""
	I1217 20:29:40.792785  522827 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1217 20:29:40.792796  522827 command_runner.go:130] > monitor_cgroup = "pod"
	I1217 20:29:40.792801  522827 command_runner.go:130] > monitor_exec_cgroup = ""
	I1217 20:29:40.792804  522827 command_runner.go:130] > allowed_annotations = [
	I1217 20:29:40.792809  522827 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1217 20:29:40.792814  522827 command_runner.go:130] > ]
	I1217 20:29:40.792819  522827 command_runner.go:130] > privileged_without_host_devices = false
	I1217 20:29:40.792823  522827 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1217 20:29:40.792828  522827 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1217 20:29:40.792834  522827 command_runner.go:130] > runtime_type = ""
	I1217 20:29:40.792839  522827 command_runner.go:130] > runtime_root = "/run/runc"
	I1217 20:29:40.792842  522827 command_runner.go:130] > inherit_default_runtime = false
	I1217 20:29:40.792846  522827 command_runner.go:130] > runtime_config_path = ""
	I1217 20:29:40.792850  522827 command_runner.go:130] > container_min_memory = ""
	I1217 20:29:40.792856  522827 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1217 20:29:40.792860  522827 command_runner.go:130] > monitor_cgroup = "pod"
	I1217 20:29:40.792864  522827 command_runner.go:130] > monitor_exec_cgroup = ""
	I1217 20:29:40.792875  522827 command_runner.go:130] > privileged_without_host_devices = false
	I1217 20:29:40.792884  522827 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1217 20:29:40.792890  522827 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1217 20:29:40.792896  522827 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1217 20:29:40.792907  522827 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1217 20:29:40.792918  522827 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1217 20:29:40.792930  522827 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores; this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1217 20:29:40.792940  522827 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1217 20:29:40.792947  522827 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1217 20:29:40.792958  522827 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1217 20:29:40.792975  522827 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1217 20:29:40.792980  522827 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1217 20:29:40.792998  522827 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1217 20:29:40.793004  522827 command_runner.go:130] > # Example:
	I1217 20:29:40.793009  522827 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1217 20:29:40.793014  522827 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1217 20:29:40.793019  522827 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1217 20:29:40.793025  522827 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1217 20:29:40.793029  522827 command_runner.go:130] > # cpuset = "0-1"
	I1217 20:29:40.793033  522827 command_runner.go:130] > # cpushares = "5"
	I1217 20:29:40.793039  522827 command_runner.go:130] > # cpuquota = "1000"
	I1217 20:29:40.793043  522827 command_runner.go:130] > # cpuperiod = "100000"
	I1217 20:29:40.793050  522827 command_runner.go:130] > # cpulimit = "35"
	I1217 20:29:40.793059  522827 command_runner.go:130] > # Where:
	I1217 20:29:40.793066  522827 command_runner.go:130] > # The workload name is workload-type.
	I1217 20:29:40.793073  522827 command_runner.go:130] > # To select this workload, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1217 20:29:40.793079  522827 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1217 20:29:40.793087  522827 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1217 20:29:40.793096  522827 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1217 20:29:40.793101  522827 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
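	Worked example for the "cpulimit" override described above: with the sample values, cpulimit = "35" millicores and cpuperiod = "100000" microseconds give cpuquota = (35 / 1000) * 100000 = 3500 microseconds, which overrides the explicitly listed cpuquota of "1000".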
	I1217 20:29:40.793106  522827 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1217 20:29:40.793116  522827 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1217 20:29:40.793122  522827 command_runner.go:130] > # Default value is set to true
	I1217 20:29:40.793132  522827 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1217 20:29:40.793141  522827 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1217 20:29:40.793146  522827 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1217 20:29:40.793150  522827 command_runner.go:130] > # Default value is set to 'false'
	I1217 20:29:40.793155  522827 command_runner.go:130] > # disable_hostport_mapping = false
	I1217 20:29:40.793163  522827 command_runner.go:130] > # timezone sets the timezone for a container in CRI-O.
	I1217 20:29:40.793172  522827 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1217 20:29:40.793175  522827 command_runner.go:130] > # timezone = ""
	I1217 20:29:40.793185  522827 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1217 20:29:40.793188  522827 command_runner.go:130] > #
	I1217 20:29:40.793194  522827 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1217 20:29:40.793212  522827 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1217 20:29:40.793215  522827 command_runner.go:130] > [crio.image]
	I1217 20:29:40.793222  522827 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1217 20:29:40.793229  522827 command_runner.go:130] > # default_transport = "docker://"
	I1217 20:29:40.793236  522827 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1217 20:29:40.793243  522827 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1217 20:29:40.793249  522827 command_runner.go:130] > # global_auth_file = ""
	I1217 20:29:40.793255  522827 command_runner.go:130] > # The image used to instantiate infra containers.
	I1217 20:29:40.793260  522827 command_runner.go:130] > # This option supports live configuration reload.
	I1217 20:29:40.793264  522827 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1217 20:29:40.793271  522827 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1217 20:29:40.793277  522827 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1217 20:29:40.793283  522827 command_runner.go:130] > # This option supports live configuration reload.
	I1217 20:29:40.793289  522827 command_runner.go:130] > # pause_image_auth_file = ""
	I1217 20:29:40.793295  522827 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1217 20:29:40.793304  522827 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1217 20:29:40.793311  522827 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1217 20:29:40.793317  522827 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1217 20:29:40.793323  522827 command_runner.go:130] > # pause_command = "/pause"
	I1217 20:29:40.793329  522827 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1217 20:29:40.793335  522827 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1217 20:29:40.793342  522827 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1217 20:29:40.793351  522827 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1217 20:29:40.793357  522827 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1217 20:29:40.793372  522827 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1217 20:29:40.793376  522827 command_runner.go:130] > # pinned_images = [
	I1217 20:29:40.793379  522827 command_runner.go:130] > # ]
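	A sketch of the three pattern kinds described above (the image names are illustrative, not from this run's config):
	
	  pinned_images = [
	  	"registry.k8s.io/pause:3.10.1",   # exact: must match the entire name
	  	"registry.k8s.io/kube-*",         # glob: wildcard only at the end
	  	"*coredns*",                      # keyword: wildcards on both ends
	  ]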
	I1217 20:29:40.793388  522827 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1217 20:29:40.793401  522827 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1217 20:29:40.793408  522827 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1217 20:29:40.793416  522827 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1217 20:29:40.793422  522827 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1217 20:29:40.793426  522827 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1217 20:29:40.793432  522827 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1217 20:29:40.793439  522827 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1217 20:29:40.793445  522827 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1217 20:29:40.793456  522827 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I1217 20:29:40.793462  522827 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1217 20:29:40.793467  522827 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1217 20:29:40.793473  522827 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1217 20:29:40.793479  522827 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1217 20:29:40.793483  522827 command_runner.go:130] > # changing them here.
	I1217 20:29:40.793488  522827 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1217 20:29:40.793492  522827 command_runner.go:130] > # insecure_registries = [
	I1217 20:29:40.793495  522827 command_runner.go:130] > # ]
	I1217 20:29:40.793514  522827 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1217 20:29:40.793522  522827 command_runner.go:130] > # ignore; the last will ignore volumes entirely.
	I1217 20:29:40.793526  522827 command_runner.go:130] > # image_volumes = "mkdir"
	I1217 20:29:40.793532  522827 command_runner.go:130] > # Temporary directory to use for storing big files
	I1217 20:29:40.793538  522827 command_runner.go:130] > # big_files_temporary_dir = ""
	I1217 20:29:40.793544  522827 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1217 20:29:40.793554  522827 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1217 20:29:40.793558  522827 command_runner.go:130] > # auto_reload_registries = false
	I1217 20:29:40.793564  522827 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1217 20:29:40.793572  522827 command_runner.go:130] > # gets canceled. This value will also be used for calculating the pull progress interval as pull_progress_timeout / 10.
	I1217 20:29:40.793584  522827 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1217 20:29:40.793589  522827 command_runner.go:130] > # pull_progress_timeout = "0s"
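	For example, setting pull_progress_timeout = "10s" would report pull progress every 10s / 10 = 1s; the default "0s" disables both the timeout and the progress output.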
	I1217 20:29:40.793594  522827 command_runner.go:130] > # The mode of short name resolution.
	I1217 20:29:40.793600  522827 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1217 20:29:40.793607  522827 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1217 20:29:40.793613  522827 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1217 20:29:40.793624  522827 command_runner.go:130] > # short_name_mode = "enforcing"
	I1217 20:29:40.793631  522827 command_runner.go:130] > # OCIArtifactMountSupport controls whether CRI-O should support OCI artifacts.
	I1217 20:29:40.793636  522827 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1217 20:29:40.793643  522827 command_runner.go:130] > # oci_artifact_mount_support = true
	I1217 20:29:40.793649  522827 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1217 20:29:40.793653  522827 command_runner.go:130] > # CNI plugins.
	I1217 20:29:40.793662  522827 command_runner.go:130] > [crio.network]
	I1217 20:29:40.793669  522827 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1217 20:29:40.793674  522827 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1217 20:29:40.793678  522827 command_runner.go:130] > # cni_default_network = ""
	I1217 20:29:40.793683  522827 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1217 20:29:40.793688  522827 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1217 20:29:40.793695  522827 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1217 20:29:40.793701  522827 command_runner.go:130] > # plugin_dirs = [
	I1217 20:29:40.793705  522827 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1217 20:29:40.793708  522827 command_runner.go:130] > # ]
	I1217 20:29:40.793712  522827 command_runner.go:130] > # List of included pod metrics.
	I1217 20:29:40.793716  522827 command_runner.go:130] > # included_pod_metrics = [
	I1217 20:29:40.793721  522827 command_runner.go:130] > # ]
	I1217 20:29:40.793727  522827 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1217 20:29:40.793733  522827 command_runner.go:130] > [crio.metrics]
	I1217 20:29:40.793738  522827 command_runner.go:130] > # Globally enable or disable metrics support.
	I1217 20:29:40.793742  522827 command_runner.go:130] > # enable_metrics = false
	I1217 20:29:40.793749  522827 command_runner.go:130] > # Specify enabled metrics collectors.
	I1217 20:29:40.793754  522827 command_runner.go:130] > # Per default all metrics are enabled.
	I1217 20:29:40.793760  522827 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1217 20:29:40.793769  522827 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1217 20:29:40.793781  522827 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1217 20:29:40.793788  522827 command_runner.go:130] > # metrics_collectors = [
	I1217 20:29:40.793792  522827 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1217 20:29:40.793796  522827 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1217 20:29:40.793801  522827 command_runner.go:130] > # 	"containers_oom_total",
	I1217 20:29:40.793810  522827 command_runner.go:130] > # 	"processes_defunct",
	I1217 20:29:40.793814  522827 command_runner.go:130] > # 	"operations_total",
	I1217 20:29:40.793818  522827 command_runner.go:130] > # 	"operations_latency_seconds",
	I1217 20:29:40.793825  522827 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1217 20:29:40.793830  522827 command_runner.go:130] > # 	"operations_errors_total",
	I1217 20:29:40.793834  522827 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1217 20:29:40.793838  522827 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1217 20:29:40.793843  522827 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1217 20:29:40.793847  522827 command_runner.go:130] > # 	"image_pulls_success_total",
	I1217 20:29:40.793851  522827 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1217 20:29:40.793857  522827 command_runner.go:130] > # 	"containers_oom_count_total",
	I1217 20:29:40.793862  522827 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1217 20:29:40.793869  522827 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1217 20:29:40.793873  522827 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1217 20:29:40.793876  522827 command_runner.go:130] > # ]
	I1217 20:29:40.793882  522827 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1217 20:29:40.793888  522827 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1217 20:29:40.793894  522827 command_runner.go:130] > # The port on which the metrics server will listen.
	I1217 20:29:40.793898  522827 command_runner.go:130] > # metrics_port = 9090
	I1217 20:29:40.793905  522827 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1217 20:29:40.793909  522827 command_runner.go:130] > # metrics_socket = ""
	I1217 20:29:40.793920  522827 command_runner.go:130] > # The certificate for the secure metrics server.
	I1217 20:29:40.793926  522827 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1217 20:29:40.793932  522827 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1217 20:29:40.793939  522827 command_runner.go:130] > # certificate on any modification event.
	I1217 20:29:40.793942  522827 command_runner.go:130] > # metrics_cert = ""
	I1217 20:29:40.793947  522827 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1217 20:29:40.793959  522827 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1217 20:29:40.793967  522827 command_runner.go:130] > # metrics_key = ""
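	A minimal sketch (assumed values, not from this run's config) enabling the metrics server with a subset of the collectors listed above:
	
	  [crio.metrics]
	  enable_metrics = true
	  metrics_host = "127.0.0.1"
	  metrics_port = 9090
	  metrics_collectors = [
	  	"operations_total",               # treated the same as "crio_operations_total", per the prefixing note above
	  	"image_pulls_success_total",
	  ]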
	I1217 20:29:40.793980  522827 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1217 20:29:40.793983  522827 command_runner.go:130] > [crio.tracing]
	I1217 20:29:40.793989  522827 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1217 20:29:40.793996  522827 command_runner.go:130] > # enable_tracing = false
	I1217 20:29:40.794002  522827 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1217 20:29:40.794006  522827 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1217 20:29:40.794015  522827 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1217 20:29:40.794020  522827 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1217 20:29:40.794024  522827 command_runner.go:130] > # CRI-O NRI configuration.
	I1217 20:29:40.794027  522827 command_runner.go:130] > [crio.nri]
	I1217 20:29:40.794031  522827 command_runner.go:130] > # Globally enable or disable NRI.
	I1217 20:29:40.794035  522827 command_runner.go:130] > # enable_nri = true
	I1217 20:29:40.794039  522827 command_runner.go:130] > # NRI socket to listen on.
	I1217 20:29:40.794045  522827 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1217 20:29:40.794050  522827 command_runner.go:130] > # NRI plugin directory to use.
	I1217 20:29:40.794061  522827 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1217 20:29:40.794066  522827 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1217 20:29:40.794073  522827 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1217 20:29:40.794082  522827 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1217 20:29:40.794150  522827 command_runner.go:130] > # nri_disable_connections = false
	I1217 20:29:40.794172  522827 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1217 20:29:40.794178  522827 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1217 20:29:40.794186  522827 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1217 20:29:40.794191  522827 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1217 20:29:40.794200  522827 command_runner.go:130] > # NRI default validator configuration.
	I1217 20:29:40.794211  522827 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1217 20:29:40.794218  522827 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1217 20:29:40.794225  522827 command_runner.go:130] > # can be restricted/rejected:
	I1217 20:29:40.794229  522827 command_runner.go:130] > # - OCI hook injection
	I1217 20:29:40.794235  522827 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1217 20:29:40.794240  522827 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1217 20:29:40.794245  522827 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1217 20:29:40.794252  522827 command_runner.go:130] > # - adjustment of linux namespaces
	I1217 20:29:40.794263  522827 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1217 20:29:40.794277  522827 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1217 20:29:40.794284  522827 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1217 20:29:40.794295  522827 command_runner.go:130] > #
	I1217 20:29:40.794299  522827 command_runner.go:130] > # [crio.nri.default_validator]
	I1217 20:29:40.794304  522827 command_runner.go:130] > # nri_enable_default_validator = false
	I1217 20:29:40.794312  522827 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1217 20:29:40.794318  522827 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1217 20:29:40.794326  522827 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1217 20:29:40.794338  522827 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1217 20:29:40.794343  522827 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1217 20:29:40.794347  522827 command_runner.go:130] > # nri_validator_required_plugins = [
	I1217 20:29:40.794352  522827 command_runner.go:130] > # ]
	I1217 20:29:40.794359  522827 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
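	A sketch of the default validator described above with rejections enabled; the plugin name is hypothetical:
	
	  [crio.nri.default_validator]
	  nri_enable_default_validator = true
	  nri_validator_reject_oci_hook_adjustment = true
	  nri_validator_required_plugins = [
	  	"example-policy-plugin",
	  ]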
	I1217 20:29:40.794368  522827 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1217 20:29:40.794373  522827 command_runner.go:130] > [crio.stats]
	I1217 20:29:40.794386  522827 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1217 20:29:40.794392  522827 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1217 20:29:40.794398  522827 command_runner.go:130] > # stats_collection_period = 0
	I1217 20:29:40.794405  522827 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1217 20:29:40.794411  522827 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1217 20:29:40.794417  522827 command_runner.go:130] > # collection_period = 0
	I1217 20:29:40.794552  522827 cni.go:84] Creating CNI manager for ""
	I1217 20:29:40.794571  522827 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 20:29:40.794583  522827 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 20:29:40.794609  522827 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-655452 NodeName:functional-655452 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 20:29:40.794745  522827 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-655452"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 20:29:40.794827  522827 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1217 20:29:40.802768  522827 command_runner.go:130] > kubeadm
	I1217 20:29:40.802789  522827 command_runner.go:130] > kubectl
	I1217 20:29:40.802794  522827 command_runner.go:130] > kubelet
	I1217 20:29:40.802809  522827 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 20:29:40.802895  522827 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 20:29:40.810641  522827 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1217 20:29:40.826893  522827 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1217 20:29:40.841576  522827 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
	I1217 20:29:40.856014  522827 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1217 20:29:40.859640  522827 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1217 20:29:40.860204  522827 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:29:40.970449  522827 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 20:29:41.821239  522827 certs.go:69] Setting up /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452 for IP: 192.168.49.2
	I1217 20:29:41.821266  522827 certs.go:195] generating shared ca certs ...
	I1217 20:29:41.821284  522827 certs.go:227] acquiring lock for ca certs: {Name:mk2b1bd9fa0b029b02dd38a7b1b08717d4426eca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:29:41.821441  522827 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.key
	I1217 20:29:41.821492  522827 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.key
	I1217 20:29:41.821509  522827 certs.go:257] generating profile certs ...
	I1217 20:29:41.821619  522827 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/client.key
	I1217 20:29:41.821682  522827 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/apiserver.key.aa95dda5
	I1217 20:29:41.821733  522827 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/proxy-client.key
	I1217 20:29:41.821747  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1217 20:29:41.821765  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1217 20:29:41.821780  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1217 20:29:41.821791  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1217 20:29:41.821805  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1217 20:29:41.821817  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1217 20:29:41.821831  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1217 20:29:41.821846  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1217 20:29:41.821894  522827 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem (1338 bytes)
	W1217 20:29:41.821945  522827 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412_empty.pem, impossibly tiny 0 bytes
	I1217 20:29:41.821959  522827 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 20:29:41.821996  522827 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem (1082 bytes)
	I1217 20:29:41.822031  522827 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem (1123 bytes)
	I1217 20:29:41.822058  522827 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem (1675 bytes)
	I1217 20:29:41.822104  522827 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem (1708 bytes)
	I1217 20:29:41.822138  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:29:41.822159  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem -> /usr/share/ca-certificates/488412.pem
	I1217 20:29:41.822175  522827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem -> /usr/share/ca-certificates/4884122.pem
	I1217 20:29:41.822802  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 20:29:41.845035  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 20:29:41.868336  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 20:29:41.901049  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1217 20:29:41.918871  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 20:29:41.937168  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1217 20:29:41.954450  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 20:29:41.971684  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 20:29:41.988884  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 20:29:42.008645  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem --> /usr/share/ca-certificates/488412.pem (1338 bytes)
	I1217 20:29:42.029398  522827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem --> /usr/share/ca-certificates/4884122.pem (1708 bytes)
	I1217 20:29:42.047332  522827 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 20:29:42.061588  522827 ssh_runner.go:195] Run: openssl version
	I1217 20:29:42.068928  522827 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1217 20:29:42.069476  522827 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/488412.pem
	I1217 20:29:42.078814  522827 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/488412.pem /etc/ssl/certs/488412.pem
	I1217 20:29:42.088990  522827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/488412.pem
	I1217 20:29:42.093920  522827 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 17 20:21 /usr/share/ca-certificates/488412.pem
	I1217 20:29:42.093987  522827 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 20:21 /usr/share/ca-certificates/488412.pem
	I1217 20:29:42.094097  522827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/488412.pem
	I1217 20:29:42.137804  522827 command_runner.go:130] > 51391683
	I1217 20:29:42.138358  522827 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 20:29:42.147537  522827 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4884122.pem
	I1217 20:29:42.157061  522827 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4884122.pem /etc/ssl/certs/4884122.pem
	I1217 20:29:42.166751  522827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4884122.pem
	I1217 20:29:42.171759  522827 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 17 20:21 /usr/share/ca-certificates/4884122.pem
	I1217 20:29:42.171865  522827 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 20:21 /usr/share/ca-certificates/4884122.pem
	I1217 20:29:42.172010  522827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4884122.pem
	I1217 20:29:42.222515  522827 command_runner.go:130] > 3ec20f2e
	I1217 20:29:42.222600  522827 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 20:29:42.231935  522827 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:29:42.242232  522827 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 20:29:42.250913  522827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:29:42.255543  522827 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 17 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:29:42.255609  522827 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:29:42.255686  522827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:29:42.298361  522827 command_runner.go:130] > b5213941
	I1217 20:29:42.298457  522827 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 20:29:42.307141  522827 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 20:29:42.311232  522827 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 20:29:42.311338  522827 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1217 20:29:42.311364  522827 command_runner.go:130] > Device: 259,1	Inode: 1313050     Links: 1
	I1217 20:29:42.311390  522827 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1217 20:29:42.311425  522827 command_runner.go:130] > Access: 2025-12-17 20:25:34.088053460 +0000
	I1217 20:29:42.311446  522827 command_runner.go:130] > Modify: 2025-12-17 20:21:29.777917427 +0000
	I1217 20:29:42.311461  522827 command_runner.go:130] > Change: 2025-12-17 20:21:29.777917427 +0000
	I1217 20:29:42.311467  522827 command_runner.go:130] >  Birth: 2025-12-17 20:21:29.777917427 +0000
	I1217 20:29:42.311555  522827 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 20:29:42.352885  522827 command_runner.go:130] > Certificate will not expire
	I1217 20:29:42.353302  522827 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 20:29:42.407045  522827 command_runner.go:130] > Certificate will not expire
	I1217 20:29:42.407143  522827 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 20:29:42.455863  522827 command_runner.go:130] > Certificate will not expire
	I1217 20:29:42.456326  522827 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 20:29:42.505636  522827 command_runner.go:130] > Certificate will not expire
	I1217 20:29:42.506227  522827 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 20:29:42.548331  522827 command_runner.go:130] > Certificate will not expire
	I1217 20:29:42.548862  522827 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1217 20:29:42.590705  522827 command_runner.go:130] > Certificate will not expire
	I1217 20:29:42.591277  522827 kubeadm.go:401] StartCluster: {Name:functional-655452 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-655452 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:29:42.591354  522827 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 20:29:42.591425  522827 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 20:29:42.618986  522827 cri.go:89] found id: ""
	I1217 20:29:42.619059  522827 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 20:29:42.626323  522827 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1217 20:29:42.626347  522827 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1217 20:29:42.626355  522827 command_runner.go:130] > /var/lib/minikube/etcd:
	I1217 20:29:42.627403  522827 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 20:29:42.627425  522827 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 20:29:42.627476  522827 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 20:29:42.635033  522827 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 20:29:42.635439  522827 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-655452" does not appear in /home/jenkins/minikube-integration/21808-485134/kubeconfig
	I1217 20:29:42.635552  522827 kubeconfig.go:62] /home/jenkins/minikube-integration/21808-485134/kubeconfig needs updating (will repair): [kubeconfig missing "functional-655452" cluster setting kubeconfig missing "functional-655452" context setting]
	I1217 20:29:42.635844  522827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-485134/kubeconfig: {Name:mkc7214551fa855d6d4575d627c3ecd54291b351 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:29:42.636278  522827 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21808-485134/kubeconfig
	I1217 20:29:42.636437  522827 kapi.go:59] client config for functional-655452: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/client.crt", KeyFile:"/home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/client.key", CAFile:"/home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb51f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 20:29:42.636955  522827 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1217 20:29:42.636974  522827 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1217 20:29:42.636979  522827 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1217 20:29:42.636984  522827 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1217 20:29:42.636988  522827 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1217 20:29:42.637054  522827 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1217 20:29:42.637345  522827 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 20:29:42.646583  522827 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1217 20:29:42.646685  522827 kubeadm.go:602] duration metric: took 19.253149ms to restartPrimaryControlPlane
	I1217 20:29:42.646744  522827 kubeadm.go:403] duration metric: took 55.459532ms to StartCluster
	I1217 20:29:42.646789  522827 settings.go:142] acquiring lock: {Name:mk91fac3a58d91b836cd0701183ef7cc1e571672 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:29:42.646894  522827 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21808-485134/kubeconfig
	I1217 20:29:42.647795  522827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-485134/kubeconfig: {Name:mkc7214551fa855d6d4575d627c3ecd54291b351 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:29:42.648137  522827 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 20:29:42.648371  522827 config.go:182] Loaded profile config "functional-655452": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 20:29:42.648423  522827 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 20:29:42.648485  522827 addons.go:70] Setting storage-provisioner=true in profile "functional-655452"
	I1217 20:29:42.648497  522827 addons.go:239] Setting addon storage-provisioner=true in "functional-655452"
	I1217 20:29:42.648521  522827 host.go:66] Checking if "functional-655452" exists ...
	I1217 20:29:42.648902  522827 addons.go:70] Setting default-storageclass=true in profile "functional-655452"
	I1217 20:29:42.648999  522827 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-655452"
	I1217 20:29:42.649042  522827 cli_runner.go:164] Run: docker container inspect functional-655452 --format={{.State.Status}}
	I1217 20:29:42.649424  522827 cli_runner.go:164] Run: docker container inspect functional-655452 --format={{.State.Status}}
	I1217 20:29:42.653921  522827 out.go:179] * Verifying Kubernetes components...
	I1217 20:29:42.656821  522827 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:29:42.689834  522827 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21808-485134/kubeconfig
	I1217 20:29:42.690004  522827 kapi.go:59] client config for functional-655452: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/client.crt", KeyFile:"/home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/client.key", CAFile:"/home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb51f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 20:29:42.690276  522827 addons.go:239] Setting addon default-storageclass=true in "functional-655452"
	I1217 20:29:42.690305  522827 host.go:66] Checking if "functional-655452" exists ...
	I1217 20:29:42.690860  522827 cli_runner.go:164] Run: docker container inspect functional-655452 --format={{.State.Status}}
	I1217 20:29:42.692598  522827 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 20:29:42.699772  522827 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:29:42.699803  522827 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 20:29:42.699871  522827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
	I1217 20:29:42.735975  522827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/functional-655452/id_rsa Username:docker}
	I1217 20:29:42.743517  522827 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 20:29:42.743543  522827 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 20:29:42.743664  522827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
	I1217 20:29:42.778325  522827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/functional-655452/id_rsa Username:docker}
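
Editor's note: each cli_runner/sshutil pair above first asks Docker which host port backs the container's 22/tcp, then dials SSH on 127.0.0.1 at that port. The same Go template from the log can be run directly; a small sketch (docker CLI assumed on PATH):

    package portlookup

    import (
    	"os/exec"
    	"strings"
    )

    // hostSSHPort returns the host port Docker mapped to the container's
    // 22/tcp, using the exact inspect template from the cli_runner lines.
    func hostSSHPort(container string) (string, error) {
    	out, err := exec.Command("docker", "container", "inspect", "-f",
    		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
    		container).Output()
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(out)), nil
    }
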
	I1217 20:29:42.848025  522827 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 20:29:42.860324  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:29:42.899199  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 20:29:43.321927  522827 node_ready.go:35] waiting up to 6m0s for node "functional-655452" to be "Ready" ...
	I1217 20:29:43.322118  522827 type.go:168] "Request Body" body=""
	I1217 20:29:43.322203  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:43.322465  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:29:43.322528  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:43.322567  522827 retry.go:31] will retry after 172.422642ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:43.322648  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:29:43.322689  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:43.322715  522827 retry.go:31] will retry after 167.097093ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:43.322809  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
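
Editor's note: both applies fail because kubectl's client-side validation must download the apiserver's OpenAPI document, and the apiserver is refusing connections; minikube therefore retries on a growing, jittered delay (the retry.go lines). A minimal sketch of that retry shape, where the 2x growth and full jitter are assumptions for illustration, not minikube's exact parameters:

    package retry

    import (
    	"log"
    	"math/rand"
    	"time"
    )

    // withBackoff retries op with a randomized, growing delay, matching the
    // "will retry after ..." cadence seen in this log.
    func withBackoff(attempts int, base time.Duration, op func() error) error {
    	delay := base
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = op(); err == nil {
    			return nil
    		}
    		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
    		log.Printf("will retry after %v: %v", sleep, err)
    		time.Sleep(sleep)
    		delay *= 2 // grow the base delay, as the increasing intervals suggest
    	}
    	return err
    }
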
	I1217 20:29:43.490380  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 20:29:43.496229  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
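
Editor's note: from here on the runner switches from plain `kubectl apply` to `kubectl apply --force`. A sketch of shelling out with the same command shape the ssh_runner lines show (paths as logged; the helper itself is hypothetical):

    package apply

    import (
    	"os"
    	"os/exec"
    )

    // applyManifest mirrors the logged command:
    //   sudo KUBECONFIG=... <kubectl> apply [--force] -f <manifest>
    // sudo accepts leading VAR=value arguments, so KUBECONFIG travels with it.
    func applyManifest(kubectl, manifest string, force bool) error {
    	args := []string{"KUBECONFIG=/var/lib/minikube/kubeconfig", kubectl, "apply"}
    	if force {
    		args = append(args, "--force")
    	}
    	args = append(args, "-f", manifest)
    	cmd := exec.Command("sudo", args...)
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	return cmd.Run()
    }
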
	I1217 20:29:43.581353  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:29:43.581433  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:43.581460  522827 retry.go:31] will retry after 331.036154ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:43.581553  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:29:43.581605  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:43.581639  522827 retry.go:31] will retry after 400.38477ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:43.822877  522827 type.go:168] "Request Body" body=""
	I1217 20:29:43.822949  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:43.823300  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
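
Editor's note: the paired "Request"/"Response" entries come from client-go's round-tripper debug logging; a failed dial surfaces as status="" with milliseconds=0. A wrapper with the same effect can be hung on any http.RoundTripper; a rough sketch:

    package debugrt

    import (
    	"log"
    	"net/http"
    	"time"
    )

    // LoggingRoundTripper emits Request/Response pairs like the
    // round_trippers entries in this log.
    type LoggingRoundTripper struct{ Next http.RoundTripper }

    func (l LoggingRoundTripper) RoundTrip(req *http.Request) (*http.Response, error) {
    	log.Printf("Request verb=%q url=%q accept=%q",
    		req.Method, req.URL.String(), req.Header.Get("Accept"))
    	start := time.Now()
    	resp, err := l.Next.RoundTrip(req)
    	ms := time.Since(start).Milliseconds()
    	if err != nil {
    		// Connection refused lands here; the log shows status="".
    		log.Printf("Response status=%q milliseconds=%d", "", ms)
    		return nil, err
    	}
    	log.Printf("Response status=%q milliseconds=%d", resp.Status, ms)
    	return resp, err
    }
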
	I1217 20:29:43.912722  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 20:29:43.970874  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:29:43.974629  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:43.974708  522827 retry.go:31] will retry after 462.319516ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:43.982922  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:29:44.044566  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:29:44.048683  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:44.048723  522827 retry.go:31] will retry after 443.115947ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:44.323122  522827 type.go:168] "Request Body" body=""
	I1217 20:29:44.323200  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:44.323555  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:44.437879  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 20:29:44.492501  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:29:44.499443  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:29:44.499482  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:44.499520  522827 retry.go:31] will retry after 1.265386144s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:44.551004  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:29:44.551045  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:44.551085  522827 retry.go:31] will retry after 774.139673ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:44.822655  522827 type.go:168] "Request Body" body=""
	I1217 20:29:44.822811  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:44.823195  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:45.323027  522827 type.go:168] "Request Body" body=""
	I1217 20:29:45.323135  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:45.323507  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:29:45.323621  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
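
Editor's note: node_ready.go polls GET /api/v1/nodes/functional-655452 on a roughly 500ms cadence (see the timestamps above), tolerating connection-refused while the apiserver restarts. A sketch of the equivalent loop with client-go (function name and parameters are illustrative):

    package nodewait

    import (
    	"context"
    	"fmt"
    	"log"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // waitNodeReady keeps GETting the node until its Ready condition is True,
    // logging and retrying on errors such as "connection refused".
    func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
    		if err != nil {
    			log.Printf("error getting node %q (will retry): %v", name, err)
    		} else {
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    					return nil
    				}
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("node %q not Ready within %v", name, timeout)
    }
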
	I1217 20:29:45.325715  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:29:45.391952  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:29:45.395668  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:45.395750  522827 retry.go:31] will retry after 1.529541916s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:45.765134  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 20:29:45.822845  522827 type.go:168] "Request Body" body=""
	I1217 20:29:45.822973  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:45.823280  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:45.823537  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:29:45.827173  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:45.827206  522827 retry.go:31] will retry after 637.037829ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:46.322836  522827 type.go:168] "Request Body" body=""
	I1217 20:29:46.322927  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:46.323203  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:46.464492  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 20:29:46.525009  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:29:46.525062  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:46.525083  522827 retry.go:31] will retry after 1.110973738s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:46.822245  522827 type.go:168] "Request Body" body=""
	I1217 20:29:46.822323  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:46.822662  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:46.926099  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:29:46.987960  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:29:46.988006  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:46.988028  522827 retry.go:31] will retry after 1.385710629s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:47.322640  522827 type.go:168] "Request Body" body=""
	I1217 20:29:47.322715  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:47.323041  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:47.636709  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 20:29:47.697205  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:29:47.697243  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:47.697264  522827 retry.go:31] will retry after 4.090194732s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:47.822497  522827 type.go:168] "Request Body" body=""
	I1217 20:29:47.822589  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:47.822932  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:29:47.822989  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:29:48.322659  522827 type.go:168] "Request Body" body=""
	I1217 20:29:48.322736  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:48.323019  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:48.374352  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:29:48.431979  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:29:48.435409  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:48.435442  522827 retry.go:31] will retry after 3.099398493s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:48.823142  522827 type.go:168] "Request Body" body=""
	I1217 20:29:48.823220  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:48.823522  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:49.322226  522827 type.go:168] "Request Body" body=""
	I1217 20:29:49.322316  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:49.322645  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:49.822280  522827 type.go:168] "Request Body" body=""
	I1217 20:29:49.822356  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:49.822709  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:50.322247  522827 type.go:168] "Request Body" body=""
	I1217 20:29:50.322328  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:50.322668  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:29:50.322721  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:29:50.822373  522827 type.go:168] "Request Body" body=""
	I1217 20:29:50.822449  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:50.822719  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:51.322273  522827 type.go:168] "Request Body" body=""
	I1217 20:29:51.322361  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:51.322682  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:51.535119  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:29:51.608419  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:29:51.608461  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:51.608504  522827 retry.go:31] will retry after 5.948755722s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:51.787984  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 20:29:51.822458  522827 type.go:168] "Request Body" body=""
	I1217 20:29:51.822538  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:51.822817  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:51.846041  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:29:51.846085  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:51.846105  522827 retry.go:31] will retry after 5.856724643s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:52.322893  522827 type.go:168] "Request Body" body=""
	I1217 20:29:52.322982  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:52.323271  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:29:52.323320  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:29:52.822254  522827 type.go:168] "Request Body" body=""
	I1217 20:29:52.822329  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:52.822653  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:53.322391  522827 type.go:168] "Request Body" body=""
	I1217 20:29:53.322479  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:53.322825  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:53.822201  522827 type.go:168] "Request Body" body=""
	I1217 20:29:53.822273  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:53.822595  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:54.322265  522827 type.go:168] "Request Body" body=""
	I1217 20:29:54.322351  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:54.322683  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:54.822243  522827 type.go:168] "Request Body" body=""
	I1217 20:29:54.822322  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:54.822646  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:29:54.822705  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:29:55.322383  522827 type.go:168] "Request Body" body=""
	I1217 20:29:55.322466  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:55.322739  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:55.822262  522827 type.go:168] "Request Body" body=""
	I1217 20:29:55.822349  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:55.822680  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:56.322404  522827 type.go:168] "Request Body" body=""
	I1217 20:29:56.322493  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:56.322874  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:56.822564  522827 type.go:168] "Request Body" body=""
	I1217 20:29:56.822678  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:56.823046  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:29:56.823109  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:29:57.322771  522827 type.go:168] "Request Body" body=""
	I1217 20:29:57.322846  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:57.323141  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:57.557506  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:29:57.638482  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:29:57.642516  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:57.642548  522827 retry.go:31] will retry after 4.405911356s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:57.703796  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 20:29:57.764881  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:29:57.764928  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:57.764950  522827 retry.go:31] will retry after 7.580168113s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:29:57.823128  522827 type.go:168] "Request Body" body=""
	I1217 20:29:57.823235  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:57.823556  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:58.322216  522827 type.go:168] "Request Body" body=""
	I1217 20:29:58.322291  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:58.322579  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:58.822288  522827 type.go:168] "Request Body" body=""
	I1217 20:29:58.822434  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:58.822838  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:29:59.322555  522827 type.go:168] "Request Body" body=""
	I1217 20:29:59.322632  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:59.322948  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:29:59.323004  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:29:59.822770  522827 type.go:168] "Request Body" body=""
	I1217 20:29:59.822844  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:29:59.823119  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:00.323032  522827 type.go:168] "Request Body" body=""
	I1217 20:30:00.323116  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:00.323489  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:00.822257  522827 type.go:168] "Request Body" body=""
	I1217 20:30:00.822349  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:00.822678  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:01.322375  522827 type.go:168] "Request Body" body=""
	I1217 20:30:01.322459  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:01.322808  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:01.822288  522827 type.go:168] "Request Body" body=""
	I1217 20:30:01.822382  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:01.822690  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:30:01.822741  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:30:02.049201  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:30:02.136097  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:30:02.136138  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:30:02.136156  522827 retry.go:31] will retry after 5.567678678s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:30:02.322750  522827 type.go:168] "Request Body" body=""
	I1217 20:30:02.322843  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:02.323173  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:02.822939  522827 type.go:168] "Request Body" body=""
	I1217 20:30:02.823008  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:02.823350  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:03.323175  522827 type.go:168] "Request Body" body=""
	I1217 20:30:03.323258  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:03.323612  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:03.822172  522827 type.go:168] "Request Body" body=""
	I1217 20:30:03.822257  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:03.822603  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:04.322314  522827 type.go:168] "Request Body" body=""
	I1217 20:30:04.322401  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:04.322723  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:30:04.322781  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:30:04.822229  522827 type.go:168] "Request Body" body=""
	I1217 20:30:04.822306  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:04.822675  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:05.322398  522827 type.go:168] "Request Body" body=""
	I1217 20:30:05.322478  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:05.322839  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:05.346115  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 20:30:05.408232  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:30:05.408289  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:30:05.408313  522827 retry.go:31] will retry after 10.078206747s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
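
The addons.go/retry.go lines above show the failure mode: the apiserver on port 8441 is down, so every kubectl apply fails OpenAPI validation with "connection refused", and minikube re-queues the apply after a randomized delay (10.08s here, 19.01s, 12.96s, 27.19s and 31.05s below). A minimal Go sketch of that apply-with-jittered-retry pattern; the helper name, base delay, and jitter are illustrative assumptions, not minikube's actual retry.go implementation:

    // Sketch: retry a kubectl apply with a jittered delay, in the spirit of
    // the "apply failed, will retry after ..." log lines above.
    // Base delay and jitter are assumptions for illustration only.
    package main

    import (
        "fmt"
        "math/rand"
        "os/exec"
        "time"
    )

    func applyWithRetry(manifest string, attempts int, base time.Duration) error {
        for i := 0; i < attempts; i++ {
            out, err := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
            if err == nil {
                return nil
            }
            // Randomize the delay so several appliers don't retry in lockstep.
            delay := base + time.Duration(rand.Int63n(int64(base)))
            fmt.Printf("apply failed, will retry after %v: %s\n", delay, out)
            time.Sleep(delay)
        }
        return fmt.Errorf("applying %s: all %d attempts failed", manifest, attempts)
    }

    func main() {
        if err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 5, 10*time.Second); err != nil {
            fmt.Println(err)
        }
    }
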
	[... polling continued at ~500ms intervals through 20:30:07.322 with identical "connection refused" results; node_ready.go:55 retry warning at 20:30:06.323 ...]
	I1217 20:30:07.703974  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:30:07.764647  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:30:07.764701  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:30:07.764721  522827 retry.go:31] will retry after 19.009086903s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... polling continued at ~500ms intervals from 20:30:07.822 through 20:30:15.322, all "connect: connection refused"; node_ready.go:55 retry warnings at 20:30:08.822, 20:30:10.822, 20:30:12.823 and 20:30:15.322 ...]
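
The interleaved GET /api/v1/nodes/functional-655452 polls are minikube's node-readiness wait: it fetches the Node object twice a second, checks its "Ready" condition, and logs a node_ready.go:55 warning every few seconds while the connection is refused. A client-go sketch of that readiness check; the poll interval, timeout, and kubeconfig path are illustrative, not minikube's node_ready.go values:

    // Sketch: poll a Node until its Ready condition is True, in the spirit
    // of the node_ready.go wait above. Interval and timeout are assumptions.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                // Matches the "error getting node ... (will retry)" warnings above.
                fmt.Printf("error getting node %q (will retry): %v\n", name, err)
            } else {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("node %q not Ready within %v", name, timeout)
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        fmt.Println(waitNodeReady(cs, "functional-655452", 4*time.Minute))
    }
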
	I1217 20:30:15.487149  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 20:30:15.557091  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:30:15.557136  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:30:15.557155  522827 retry.go:31] will retry after 12.964696684s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... polling continued at ~500ms intervals from 20:30:15.822 through 20:30:26.322, all "connect: connection refused", with node_ready.go:55 retry warnings roughly every 2–2.5s ...]
	I1217 20:30:26.774084  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	[... one more poll at 20:30:26.822 failed the same way, with a node_ready.go:55 retry warning ...]
	I1217 20:30:26.837910  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:30:26.841500  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:30:26.841530  522827 retry.go:31] will retry after 11.131595667s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... polling continued at ~500ms intervals through 20:30:28.322, all "connect: connection refused" ...]
	I1217 20:30:28.523062  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 20:30:28.580613  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:30:28.584486  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:30:28.584522  522827 retry.go:31] will retry after 27.188888106s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
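
Note that the remedy the error text suggests (--validate=false) would not get these applies through: validation merely fails first because the OpenAPI schema cannot be downloaded, while the underlying problem is that nothing is listening on port 8441 at all. A quick TCP probe separates the two cases; the address comes from the log, the timeout is an illustrative choice:

    // Sketch: distinguish "apiserver down" from a mere validation problem
    // by dialing the endpoint the errors point at.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver unreachable:", err) // matches the "connection refused" above
            return
        }
        conn.Close()
        fmt.Println("apiserver port is accepting connections")
    }
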
	[... polling continued at ~500ms intervals from 20:30:28.822 through 20:30:37.822, all "connect: connection refused", with node_ready.go:55 retry warnings roughly every 2–2.5s ...]
	I1217 20:30:37.974039  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:30:38.040817  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:30:38.040869  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:30:38.040889  522827 retry.go:31] will retry after 31.049103728s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... polling continued at ~500ms intervals from 20:30:38.322 through 20:30:53.822, all "connect: connection refused", with node_ready.go:55 retry warnings roughly every 2–2.5s ...]
	I1217 20:30:54.322237  522827 type.go:168] "Request Body" body=""
	I1217 20:30:54.322316  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:54.322660  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:54.822361  522827 type.go:168] "Request Body" body=""
	I1217 20:30:54.822444  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:54.822766  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:55.322219  522827 type.go:168] "Request Body" body=""
	I1217 20:30:55.322295  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:55.322552  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:30:55.774295  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 20:30:55.822774  522827 type.go:168] "Request Body" body=""
	I1217 20:30:55.822854  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:30:55.823178  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:30:55.823237  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:30:55.835665  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:30:55.835703  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 20:30:55.835722  522827 retry.go:31] will retry after 28.301795669s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
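The retry.go:31 entry above captures the shape of minikube's addon apply loop: on a non-zero exit, record the full stdout/stderr, wait a computed delay, and rerun the same command. Below is a minimal standalone sketch of that run-then-retry pattern, assuming a plain exec of kubectl; the helper name and the jittered-delay choice are illustrative, not minikube's actual API.

// retry_apply.go - illustrative sketch only, not minikube's implementation.
package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// runWithRetry runs the command up to attempts times, sleeping a jittered
// delay between failures (the odd 28.301795669s above suggests jitter).
func runWithRetry(attempts int, base time.Duration, name string, args ...string) error {
	var lastErr error
	for i := 1; i <= attempts; i++ {
		out, err := exec.Command(name, args...).CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("attempt %d failed: %w\noutput:\n%s", i, err, out)
		if i < attempts {
			delay := base + time.Duration(rand.Int63n(int64(base)/2))
			fmt.Printf("will retry after %s: %v\n", delay, err)
			time.Sleep(delay)
		}
	}
	return lastErr
}

func main() {
	err := runWithRetry(3, 5*time.Second,
		"kubectl", "apply", "--force", "-f", "/etc/kubernetes/addons/storageclass.yaml")
	if err != nil {
		fmt.Println("giving up:", err)
	}
}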
	[... GET /api/v1/nodes/functional-655452 polled every ~500 ms from 20:30:56.322 through 20:31:08.822, all refused, with periodic node_ready.go:55 warnings ...]
	I1217 20:31:09.091155  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:31:09.152330  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:31:09.155944  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:31:09.156044  522827 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
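For context on the failure mode itself: the addon callback runs kubectl apply with an explicit KUBECONFIG, and validation fails because kubectl cannot download the OpenAPI schema from the unreachable apiserver; the error text itself names --validate=false as the bypass. A rough local equivalent of the logged invocation, assuming kubectl on PATH (paths copied from the log, function name illustrative):

// apply_addon.go - illustrative sketch only.
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// applyManifest mirrors the logged command: kubectl apply --force -f <file>
// with KUBECONFIG set, surfacing stderr when the apply fails.
func applyManifest(manifest string) error {
	cmd := exec.Command("kubectl", "apply", "--force", "-f", manifest)
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	var stderr bytes.Buffer
	cmd.Stderr = &stderr
	if err := cmd.Run(); err != nil {
		// While the apiserver is down, stderr carries the
		// "failed to download openapi ... connection refused" text seen above.
		return fmt.Errorf("apply %s: %w\nstderr:\n%s", manifest, err, stderr.String())
	}
	return nil
}

func main() {
	if err := applyManifest("/etc/kubernetes/addons/storage-provisioner.yaml"); err != nil {
		fmt.Println(err)
	}
}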
	[... polling continued every ~500 ms from 20:31:09.322 through 20:31:23.822, still refused, with periodic node_ready.go:55 warnings ...]
	I1217 20:31:24.138134  522827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 20:31:24.201991  522827 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:31:24.202036  522827 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 20:31:24.202117  522827 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1217 20:31:24.205262  522827 out.go:179] * Enabled addons: 
	I1217 20:31:24.208903  522827 addons.go:530] duration metric: took 1m41.560475312s for enable addons: enabled=[]
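The remaining entries are all the same readiness probe: node_ready.go GETs the node object twice a second and surfaces a warning every few refused attempts. A self-contained sketch of that poll-and-warn loop, assuming direct HTTPS access to the endpoint from the log (TLS verification is skipped purely for illustration; this is not minikube's code):

// poll_node_ready.go - illustrative sketch only.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	url := "https://192.168.49.2:8441/api/v1/nodes/functional-655452"
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The test apiserver uses a cluster-local CA; skip verification
		// in this sketch only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for attempt := 1; attempt <= 20; attempt++ {
		resp, err := client.Get(url)
		if err != nil {
			if attempt%4 == 0 { // warn periodically, like the W lines above
				fmt.Printf("error getting node condition \"Ready\" status (will retry): %v\n", err)
			}
			time.Sleep(500 * time.Millisecond)
			continue
		}
		resp.Body.Close()
		fmt.Println("node reachable:", resp.Status)
		return
	}
	fmt.Println("node never became reachable")
}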
	I1217 20:31:24.322240  522827 type.go:168] "Request Body" body=""
	I1217 20:31:24.322318  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:24.322668  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:24.822384  522827 type.go:168] "Request Body" body=""
	I1217 20:31:24.822478  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:24.822815  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:25.322366  522827 type.go:168] "Request Body" body=""
	I1217 20:31:25.322441  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:25.322753  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:31:25.322800  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:31:25.822458  522827 type.go:168] "Request Body" body=""
	I1217 20:31:25.822532  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:25.822902  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:26.322508  522827 type.go:168] "Request Body" body=""
	I1217 20:31:26.322584  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:26.322912  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:26.822194  522827 type.go:168] "Request Body" body=""
	I1217 20:31:26.822272  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:26.822592  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:27.322423  522827 type.go:168] "Request Body" body=""
	I1217 20:31:27.322530  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:27.322841  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:31:27.322894  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:31:27.822547  522827 type.go:168] "Request Body" body=""
	I1217 20:31:27.822621  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:27.822984  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:28.322302  522827 type.go:168] "Request Body" body=""
	I1217 20:31:28.322385  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:28.322685  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:28.822382  522827 type.go:168] "Request Body" body=""
	I1217 20:31:28.822464  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:28.822833  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:29.322567  522827 type.go:168] "Request Body" body=""
	I1217 20:31:29.322643  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:29.322987  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:31:29.323043  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:31:29.822734  522827 type.go:168] "Request Body" body=""
	I1217 20:31:29.822807  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:29.823076  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:30.322834  522827 type.go:168] "Request Body" body=""
	I1217 20:31:30.322906  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:30.323262  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:30.823096  522827 type.go:168] "Request Body" body=""
	I1217 20:31:30.823184  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:30.823505  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:31.322243  522827 type.go:168] "Request Body" body=""
	I1217 20:31:31.322319  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:31.322606  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:31.822221  522827 type.go:168] "Request Body" body=""
	I1217 20:31:31.822295  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:31.822614  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:31:31.822668  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:31:32.322585  522827 type.go:168] "Request Body" body=""
	I1217 20:31:32.322665  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:32.322989  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:32.822991  522827 type.go:168] "Request Body" body=""
	I1217 20:31:32.823063  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:32.823325  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:33.323053  522827 type.go:168] "Request Body" body=""
	I1217 20:31:33.323151  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:33.323496  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:33.822867  522827 type.go:168] "Request Body" body=""
	I1217 20:31:33.822946  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:33.823324  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:31:33.823391  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:31:34.323215  522827 type.go:168] "Request Body" body=""
	I1217 20:31:34.323300  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:34.323630  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:34.822311  522827 type.go:168] "Request Body" body=""
	I1217 20:31:34.822386  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:34.822717  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:35.322215  522827 type.go:168] "Request Body" body=""
	I1217 20:31:35.322293  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:35.322668  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:35.822201  522827 type.go:168] "Request Body" body=""
	I1217 20:31:35.822284  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:35.822539  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:36.322256  522827 type.go:168] "Request Body" body=""
	I1217 20:31:36.322344  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:36.322708  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:31:36.322778  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:31:36.822306  522827 type.go:168] "Request Body" body=""
	I1217 20:31:36.822387  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:36.822729  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:37.322707  522827 type.go:168] "Request Body" body=""
	I1217 20:31:37.322775  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:37.323029  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:31:37.822287  522827 type.go:168] "Request Body" body=""
	I1217 20:31:37.822373  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:37.823676  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1217 20:31:38.322400  522827 type.go:168] "Request Body" body=""
	I1217 20:31:38.322477  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:38.322802  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:31:38.322850  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:31:38.822462  522827 type.go:168] "Request Body" body=""
	I1217 20:31:38.822552  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:31:38.822813  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	... [this request/response cycle repeats unchanged every ~500ms from 20:31:39 through 20:32:39; each GET of "https://192.168.49.2:8441/api/v1/nodes/functional-655452" returns status="" milliseconds=0, and every ~2.5s node_ready.go:55 logs the same warning: error getting node "functional-655452" condition "Ready" status (will retry): dial tcp 192.168.49.2:8441: connect: connection refused] ...
	I1217 20:32:40.322537  522827 type.go:168] "Request Body" body=""
	I1217 20:32:40.322611  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:40.322918  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:32:40.322971  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:32:40.822252  522827 type.go:168] "Request Body" body=""
	I1217 20:32:40.822327  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:40.822657  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:41.322382  522827 type.go:168] "Request Body" body=""
	I1217 20:32:41.322458  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:41.322791  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:41.822307  522827 type.go:168] "Request Body" body=""
	I1217 20:32:41.822377  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:41.822665  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:42.322693  522827 type.go:168] "Request Body" body=""
	I1217 20:32:42.322766  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:42.323102  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:32:42.323170  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:32:42.823022  522827 type.go:168] "Request Body" body=""
	I1217 20:32:42.823123  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:42.823479  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:43.322175  522827 type.go:168] "Request Body" body=""
	I1217 20:32:43.322271  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:43.322523  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:43.822319  522827 type.go:168] "Request Body" body=""
	I1217 20:32:43.822415  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:43.822789  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:44.322263  522827 type.go:168] "Request Body" body=""
	I1217 20:32:44.322339  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:44.322660  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:44.822216  522827 type.go:168] "Request Body" body=""
	I1217 20:32:44.822287  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:44.822560  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:32:44.822601  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:32:45.322398  522827 type.go:168] "Request Body" body=""
	I1217 20:32:45.322606  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:45.323117  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:45.823034  522827 type.go:168] "Request Body" body=""
	I1217 20:32:45.823140  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:45.823517  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:46.322220  522827 type.go:168] "Request Body" body=""
	I1217 20:32:46.322304  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:46.322623  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:46.822261  522827 type.go:168] "Request Body" body=""
	I1217 20:32:46.822341  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:46.822687  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:32:46.822747  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:32:47.322536  522827 type.go:168] "Request Body" body=""
	I1217 20:32:47.322612  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:47.322939  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:47.822456  522827 type.go:168] "Request Body" body=""
	I1217 20:32:47.822529  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:47.822784  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:48.322237  522827 type.go:168] "Request Body" body=""
	I1217 20:32:48.322319  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:48.322675  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:48.822389  522827 type.go:168] "Request Body" body=""
	I1217 20:32:48.822473  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:48.822819  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:32:48.822885  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:32:49.322495  522827 type.go:168] "Request Body" body=""
	I1217 20:32:49.322569  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:49.322865  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:49.822558  522827 type.go:168] "Request Body" body=""
	I1217 20:32:49.822637  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:49.822970  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:50.322764  522827 type.go:168] "Request Body" body=""
	I1217 20:32:50.322842  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:50.323193  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:50.822930  522827 type.go:168] "Request Body" body=""
	I1217 20:32:50.823006  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:50.823301  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:32:50.823453  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:32:51.322133  522827 type.go:168] "Request Body" body=""
	I1217 20:32:51.322212  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:51.322566  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:51.822287  522827 type.go:168] "Request Body" body=""
	I1217 20:32:51.822362  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:51.822679  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:52.322645  522827 type.go:168] "Request Body" body=""
	I1217 20:32:52.322727  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:52.323054  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:52.823092  522827 type.go:168] "Request Body" body=""
	I1217 20:32:52.823172  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:52.823505  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:32:52.823559  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:32:53.322267  522827 type.go:168] "Request Body" body=""
	I1217 20:32:53.322354  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:53.322691  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:53.822229  522827 type.go:168] "Request Body" body=""
	I1217 20:32:53.822299  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:53.822601  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:54.322252  522827 type.go:168] "Request Body" body=""
	I1217 20:32:54.322338  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:54.322639  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:54.822220  522827 type.go:168] "Request Body" body=""
	I1217 20:32:54.822304  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:54.822635  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:55.322307  522827 type.go:168] "Request Body" body=""
	I1217 20:32:55.322374  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:55.322666  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:32:55.322723  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:32:55.822406  522827 type.go:168] "Request Body" body=""
	I1217 20:32:55.822481  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:55.822818  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:56.322513  522827 type.go:168] "Request Body" body=""
	I1217 20:32:56.322588  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:56.322929  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:56.822610  522827 type.go:168] "Request Body" body=""
	I1217 20:32:56.822683  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:56.823008  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:57.322760  522827 type.go:168] "Request Body" body=""
	I1217 20:32:57.322844  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:57.323218  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:32:57.323276  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:32:57.823035  522827 type.go:168] "Request Body" body=""
	I1217 20:32:57.823125  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:57.823456  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:58.322253  522827 type.go:168] "Request Body" body=""
	I1217 20:32:58.322339  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:58.322631  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:58.822231  522827 type.go:168] "Request Body" body=""
	I1217 20:32:58.822313  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:58.822643  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:59.322230  522827 type.go:168] "Request Body" body=""
	I1217 20:32:59.322307  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:59.322642  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:32:59.822186  522827 type.go:168] "Request Body" body=""
	I1217 20:32:59.822256  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:32:59.822567  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:32:59.822624  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:33:00.322321  522827 type.go:168] "Request Body" body=""
	I1217 20:33:00.322425  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:00.322741  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:00.822652  522827 type.go:168] "Request Body" body=""
	I1217 20:33:00.822731  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:00.823058  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:01.322828  522827 type.go:168] "Request Body" body=""
	I1217 20:33:01.322902  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:01.323234  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:01.823025  522827 type.go:168] "Request Body" body=""
	I1217 20:33:01.823111  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:01.823448  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:33:01.823507  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:33:02.322504  522827 type.go:168] "Request Body" body=""
	I1217 20:33:02.322584  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:02.322930  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:02.822578  522827 type.go:168] "Request Body" body=""
	I1217 20:33:02.822653  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:02.822924  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:03.322752  522827 type.go:168] "Request Body" body=""
	I1217 20:33:03.322834  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:03.323161  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:03.822980  522827 type.go:168] "Request Body" body=""
	I1217 20:33:03.823059  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:03.823424  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:04.322126  522827 type.go:168] "Request Body" body=""
	I1217 20:33:04.322197  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:04.322455  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:33:04.322500  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:33:04.822206  522827 type.go:168] "Request Body" body=""
	I1217 20:33:04.822286  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:04.822623  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:05.322331  522827 type.go:168] "Request Body" body=""
	I1217 20:33:05.322416  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:05.322767  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:05.822465  522827 type.go:168] "Request Body" body=""
	I1217 20:33:05.822544  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:05.822897  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:06.322236  522827 type.go:168] "Request Body" body=""
	I1217 20:33:06.322318  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:06.322658  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:33:06.322719  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:33:06.822389  522827 type.go:168] "Request Body" body=""
	I1217 20:33:06.822469  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:06.822803  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:07.322597  522827 type.go:168] "Request Body" body=""
	I1217 20:33:07.322665  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:07.322926  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:07.822204  522827 type.go:168] "Request Body" body=""
	I1217 20:33:07.822282  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:07.822625  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:08.322315  522827 type.go:168] "Request Body" body=""
	I1217 20:33:08.322394  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:08.322734  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:33:08.322788  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:33:08.822200  522827 type.go:168] "Request Body" body=""
	I1217 20:33:08.822275  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:08.822538  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:09.322257  522827 type.go:168] "Request Body" body=""
	I1217 20:33:09.322339  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:09.322703  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:09.822418  522827 type.go:168] "Request Body" body=""
	I1217 20:33:09.822497  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:09.822851  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:10.322301  522827 type.go:168] "Request Body" body=""
	I1217 20:33:10.322371  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:10.322635  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:10.822269  522827 type.go:168] "Request Body" body=""
	I1217 20:33:10.822349  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:10.822626  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:33:10.822672  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:33:11.322251  522827 type.go:168] "Request Body" body=""
	I1217 20:33:11.322332  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:11.322653  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:11.822193  522827 type.go:168] "Request Body" body=""
	I1217 20:33:11.822295  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:11.822606  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:12.322610  522827 type.go:168] "Request Body" body=""
	I1217 20:33:12.322688  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:12.323024  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:12.822814  522827 type.go:168] "Request Body" body=""
	I1217 20:33:12.822898  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:12.823229  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:33:12.823291  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:33:13.323028  522827 type.go:168] "Request Body" body=""
	I1217 20:33:13.323108  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:13.323382  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:13.823191  522827 type.go:168] "Request Body" body=""
	I1217 20:33:13.823271  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:13.823643  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:14.322366  522827 type.go:168] "Request Body" body=""
	I1217 20:33:14.322445  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:14.322788  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:14.822460  522827 type.go:168] "Request Body" body=""
	I1217 20:33:14.822538  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:14.822850  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:15.322243  522827 type.go:168] "Request Body" body=""
	I1217 20:33:15.322321  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:15.322677  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:33:15.322736  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:33:15.822256  522827 type.go:168] "Request Body" body=""
	I1217 20:33:15.822335  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:15.822688  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:16.322376  522827 type.go:168] "Request Body" body=""
	I1217 20:33:16.322452  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:16.322776  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:16.822223  522827 type.go:168] "Request Body" body=""
	I1217 20:33:16.822299  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:16.822647  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:17.322462  522827 type.go:168] "Request Body" body=""
	I1217 20:33:17.322538  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:17.322921  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:33:17.322982  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:33:17.822190  522827 type.go:168] "Request Body" body=""
	I1217 20:33:17.822267  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:17.822594  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:18.322239  522827 type.go:168] "Request Body" body=""
	I1217 20:33:18.322318  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:18.322658  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:18.822360  522827 type.go:168] "Request Body" body=""
	I1217 20:33:18.822447  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:18.822810  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:19.322194  522827 type.go:168] "Request Body" body=""
	I1217 20:33:19.322274  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:19.322540  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:19.822224  522827 type.go:168] "Request Body" body=""
	I1217 20:33:19.822303  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:19.822648  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:33:19.822702  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:33:20.322363  522827 type.go:168] "Request Body" body=""
	I1217 20:33:20.322440  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:20.322810  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:20.822186  522827 type.go:168] "Request Body" body=""
	I1217 20:33:20.822289  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:20.822610  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:21.322209  522827 type.go:168] "Request Body" body=""
	I1217 20:33:21.322284  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:21.322660  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:21.822355  522827 type.go:168] "Request Body" body=""
	I1217 20:33:21.822454  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:21.822796  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:33:21.822847  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:33:22.322602  522827 type.go:168] "Request Body" body=""
	I1217 20:33:22.322708  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:22.322975  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:22.823014  522827 type.go:168] "Request Body" body=""
	I1217 20:33:22.823104  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:22.823484  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:23.322227  522827 type.go:168] "Request Body" body=""
	I1217 20:33:23.322304  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:23.322655  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:23.822334  522827 type.go:168] "Request Body" body=""
	I1217 20:33:23.822402  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:23.822683  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:24.322240  522827 type.go:168] "Request Body" body=""
	I1217 20:33:24.322323  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:24.322616  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:33:24.322662  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:33:24.822300  522827 type.go:168] "Request Body" body=""
	I1217 20:33:24.822380  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:24.822709  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:25.322192  522827 type.go:168] "Request Body" body=""
	I1217 20:33:25.322263  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:25.322513  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:25.822234  522827 type.go:168] "Request Body" body=""
	I1217 20:33:25.822315  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:25.822664  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:26.322370  522827 type.go:168] "Request Body" body=""
	I1217 20:33:26.322443  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:26.322762  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:33:26.322816  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:33:26.822197  522827 type.go:168] "Request Body" body=""
	I1217 20:33:26.822271  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:26.822589  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:27.322602  522827 type.go:168] "Request Body" body=""
	I1217 20:33:27.322684  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:27.323034  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:27.822840  522827 type.go:168] "Request Body" body=""
	I1217 20:33:27.822919  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:27.823295  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:33:28.323025  522827 type.go:168] "Request Body" body=""
	I1217 20:33:28.323101  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:28.323352  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:33:28.323391  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:33:28.823128  522827 type.go:168] "Request Body" body=""
	I1217 20:33:28.823210  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:33:28.823616  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the GET https://192.168.49.2:8441/api/v1/nodes/functional-655452 poll above repeats every ~500 ms from 20:33:29.322 through 20:34:30.822 with identical empty responses (status="" headers="" milliseconds=0); node_ready.go:55 logs the same 'error getting node "functional-655452" condition "Ready" status (will retry): ... dial tcp 192.168.49.2:8441: connect: connection refused' warning roughly every 2.5 s, from 20:33:30.822 through 20:34:28.823 ...]
	I1217 20:34:31.322434  522827 type.go:168] "Request Body" body=""
	I1217 20:34:31.322509  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:31.322812  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:34:31.322864  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:34:31.822232  522827 type.go:168] "Request Body" body=""
	I1217 20:34:31.822308  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:31.822637  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:32.322630  522827 type.go:168] "Request Body" body=""
	I1217 20:34:32.322703  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:32.323039  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:32.822905  522827 type.go:168] "Request Body" body=""
	I1217 20:34:32.822987  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:32.823335  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:33.323139  522827 type.go:168] "Request Body" body=""
	I1217 20:34:33.323215  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:33.323560  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:34:33.323633  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:34:33.822242  522827 type.go:168] "Request Body" body=""
	I1217 20:34:33.822322  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:33.822657  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:34.322213  522827 type.go:168] "Request Body" body=""
	I1217 20:34:34.322306  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:34.322645  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:34.822274  522827 type.go:168] "Request Body" body=""
	I1217 20:34:34.822351  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:34.822676  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:35.322402  522827 type.go:168] "Request Body" body=""
	I1217 20:34:35.322487  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:35.322839  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:35.822515  522827 type.go:168] "Request Body" body=""
	I1217 20:34:35.822590  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:35.822930  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:34:35.822983  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:34:36.322255  522827 type.go:168] "Request Body" body=""
	I1217 20:34:36.322336  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:36.322707  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:36.822275  522827 type.go:168] "Request Body" body=""
	I1217 20:34:36.822356  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:36.822697  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:37.322527  522827 type.go:168] "Request Body" body=""
	I1217 20:34:37.322599  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:37.322871  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:37.822227  522827 type.go:168] "Request Body" body=""
	I1217 20:34:37.822302  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:37.822644  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:38.322240  522827 type.go:168] "Request Body" body=""
	I1217 20:34:38.322315  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:38.322686  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:34:38.322744  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:34:38.822377  522827 type.go:168] "Request Body" body=""
	I1217 20:34:38.822445  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:38.822700  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:39.322353  522827 type.go:168] "Request Body" body=""
	I1217 20:34:39.322436  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:39.322776  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:39.822486  522827 type.go:168] "Request Body" body=""
	I1217 20:34:39.822576  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:39.822923  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:40.322215  522827 type.go:168] "Request Body" body=""
	I1217 20:34:40.322285  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:40.322627  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:40.822320  522827 type.go:168] "Request Body" body=""
	I1217 20:34:40.822392  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:40.822751  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:34:40.822813  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:34:41.322501  522827 type.go:168] "Request Body" body=""
	I1217 20:34:41.322580  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:41.322864  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:41.822296  522827 type.go:168] "Request Body" body=""
	I1217 20:34:41.822379  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:41.822641  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:42.322620  522827 type.go:168] "Request Body" body=""
	I1217 20:34:42.322699  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:42.323049  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:42.822854  522827 type.go:168] "Request Body" body=""
	I1217 20:34:42.822937  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:42.823298  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:34:42.823352  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:34:43.322922  522827 type.go:168] "Request Body" body=""
	I1217 20:34:43.322997  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:43.323438  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:43.822136  522827 type.go:168] "Request Body" body=""
	I1217 20:34:43.822214  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:43.822552  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:44.322254  522827 type.go:168] "Request Body" body=""
	I1217 20:34:44.322328  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:44.322684  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:44.822377  522827 type.go:168] "Request Body" body=""
	I1217 20:34:44.822446  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:44.822707  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:45.322396  522827 type.go:168] "Request Body" body=""
	I1217 20:34:45.322477  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:45.322826  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:34:45.322884  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:34:45.822533  522827 type.go:168] "Request Body" body=""
	I1217 20:34:45.822614  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:45.822967  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:46.322723  522827 type.go:168] "Request Body" body=""
	I1217 20:34:46.322799  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:46.323071  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:46.822878  522827 type.go:168] "Request Body" body=""
	I1217 20:34:46.822963  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:46.823309  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:47.322193  522827 type.go:168] "Request Body" body=""
	I1217 20:34:47.322271  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:47.322594  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:47.822176  522827 type.go:168] "Request Body" body=""
	I1217 20:34:47.822253  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:47.822576  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:34:47.822624  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:34:48.322276  522827 type.go:168] "Request Body" body=""
	I1217 20:34:48.322360  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:48.322716  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:48.822204  522827 type.go:168] "Request Body" body=""
	I1217 20:34:48.822283  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:48.822652  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:49.322200  522827 type.go:168] "Request Body" body=""
	I1217 20:34:49.322273  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:49.322585  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:49.822198  522827 type.go:168] "Request Body" body=""
	I1217 20:34:49.822275  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:49.822635  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:34:49.822689  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:34:50.322230  522827 type.go:168] "Request Body" body=""
	I1217 20:34:50.322310  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:50.322638  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:50.822201  522827 type.go:168] "Request Body" body=""
	I1217 20:34:50.822275  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:50.822586  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:51.322215  522827 type.go:168] "Request Body" body=""
	I1217 20:34:51.322292  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:51.322632  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:51.822330  522827 type.go:168] "Request Body" body=""
	I1217 20:34:51.822415  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:51.822753  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:34:51.822806  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:34:52.322585  522827 type.go:168] "Request Body" body=""
	I1217 20:34:52.322659  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:52.322934  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:52.822902  522827 type.go:168] "Request Body" body=""
	I1217 20:34:52.822975  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:52.823296  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:53.323063  522827 type.go:168] "Request Body" body=""
	I1217 20:34:53.323136  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:53.323470  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:53.822150  522827 type.go:168] "Request Body" body=""
	I1217 20:34:53.822229  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:53.822559  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:54.322241  522827 type.go:168] "Request Body" body=""
	I1217 20:34:54.322323  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:54.322670  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:34:54.322729  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:34:54.822224  522827 type.go:168] "Request Body" body=""
	I1217 20:34:54.822309  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:54.822652  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:55.322322  522827 type.go:168] "Request Body" body=""
	I1217 20:34:55.322399  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:55.322652  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:55.822320  522827 type.go:168] "Request Body" body=""
	I1217 20:34:55.822399  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:55.822745  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:56.322224  522827 type.go:168] "Request Body" body=""
	I1217 20:34:56.322302  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:56.322656  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:56.822139  522827 type.go:168] "Request Body" body=""
	I1217 20:34:56.822217  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:56.822522  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:34:56.822571  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:34:57.322502  522827 type.go:168] "Request Body" body=""
	I1217 20:34:57.322575  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:57.322903  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:57.822521  522827 type.go:168] "Request Body" body=""
	I1217 20:34:57.822594  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:57.822915  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:58.322195  522827 type.go:168] "Request Body" body=""
	I1217 20:34:58.322269  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:58.322623  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:58.822280  522827 type.go:168] "Request Body" body=""
	I1217 20:34:58.822374  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:58.822695  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:34:58.822745  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:34:59.322231  522827 type.go:168] "Request Body" body=""
	I1217 20:34:59.322309  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:59.322669  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:34:59.822371  522827 type.go:168] "Request Body" body=""
	I1217 20:34:59.822442  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:34:59.822756  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:00.322497  522827 type.go:168] "Request Body" body=""
	I1217 20:35:00.322580  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:00.322949  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:00.822996  522827 type.go:168] "Request Body" body=""
	I1217 20:35:00.823083  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:00.823467  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:35:00.823521  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:35:01.322212  522827 type.go:168] "Request Body" body=""
	I1217 20:35:01.322285  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:01.322553  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:01.822229  522827 type.go:168] "Request Body" body=""
	I1217 20:35:01.822306  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:01.822627  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:02.322626  522827 type.go:168] "Request Body" body=""
	I1217 20:35:02.322709  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:02.323066  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:02.822977  522827 type.go:168] "Request Body" body=""
	I1217 20:35:02.823069  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:02.823348  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:03.323127  522827 type.go:168] "Request Body" body=""
	I1217 20:35:03.323211  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:03.323563  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:35:03.323642  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:35:03.822192  522827 type.go:168] "Request Body" body=""
	I1217 20:35:03.822280  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:03.822696  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:04.322196  522827 type.go:168] "Request Body" body=""
	I1217 20:35:04.322276  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:04.322589  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:04.822325  522827 type.go:168] "Request Body" body=""
	I1217 20:35:04.822408  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:04.822706  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:05.322377  522827 type.go:168] "Request Body" body=""
	I1217 20:35:05.322458  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:05.322803  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:05.822315  522827 type.go:168] "Request Body" body=""
	I1217 20:35:05.822428  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:05.822690  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:35:05.822728  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:35:06.322269  522827 type.go:168] "Request Body" body=""
	I1217 20:35:06.322367  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:06.322690  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:06.822232  522827 type.go:168] "Request Body" body=""
	I1217 20:35:06.822331  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:06.822614  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:07.322502  522827 type.go:168] "Request Body" body=""
	I1217 20:35:07.322573  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:07.322817  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:07.822274  522827 type.go:168] "Request Body" body=""
	I1217 20:35:07.822356  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:07.822698  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:35:07.822760  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:35:08.322442  522827 type.go:168] "Request Body" body=""
	I1217 20:35:08.322522  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:08.322845  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:08.822222  522827 type.go:168] "Request Body" body=""
	I1217 20:35:08.822296  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:08.822597  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:09.322251  522827 type.go:168] "Request Body" body=""
	I1217 20:35:09.322329  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:09.322650  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:09.822333  522827 type.go:168] "Request Body" body=""
	I1217 20:35:09.822408  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:09.822762  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:35:09.822817  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:35:10.322442  522827 type.go:168] "Request Body" body=""
	I1217 20:35:10.322538  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:10.322820  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:10.822275  522827 type.go:168] "Request Body" body=""
	I1217 20:35:10.822347  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:10.822634  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:11.322386  522827 type.go:168] "Request Body" body=""
	I1217 20:35:11.322464  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:11.322764  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:11.822226  522827 type.go:168] "Request Body" body=""
	I1217 20:35:11.822297  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:11.822669  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:12.322679  522827 type.go:168] "Request Body" body=""
	I1217 20:35:12.322763  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:12.323067  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:35:12.323113  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:35:12.822935  522827 type.go:168] "Request Body" body=""
	I1217 20:35:12.823024  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:12.823355  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:13.323128  522827 type.go:168] "Request Body" body=""
	I1217 20:35:13.323210  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:13.323540  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:13.822246  522827 type.go:168] "Request Body" body=""
	I1217 20:35:13.822355  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:13.822636  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:14.322330  522827 type.go:168] "Request Body" body=""
	I1217 20:35:14.322406  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:14.322743  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:14.822304  522827 type.go:168] "Request Body" body=""
	I1217 20:35:14.822372  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:14.822645  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:35:14.822685  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:35:15.322347  522827 type.go:168] "Request Body" body=""
	I1217 20:35:15.322423  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:15.322800  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:15.822252  522827 type.go:168] "Request Body" body=""
	I1217 20:35:15.822332  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:15.822662  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:16.322191  522827 type.go:168] "Request Body" body=""
	I1217 20:35:16.322260  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:16.322568  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:16.822255  522827 type.go:168] "Request Body" body=""
	I1217 20:35:16.822332  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:16.822681  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:35:16.822743  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:35:17.322820  522827 type.go:168] "Request Body" body=""
	I1217 20:35:17.322896  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:17.323309  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:17.823040  522827 type.go:168] "Request Body" body=""
	I1217 20:35:17.823109  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:17.823374  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:18.323149  522827 type.go:168] "Request Body" body=""
	I1217 20:35:18.323236  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:18.323572  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:18.822287  522827 type.go:168] "Request Body" body=""
	I1217 20:35:18.822373  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:18.822708  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:35:18.822767  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:35:19.322441  522827 type.go:168] "Request Body" body=""
	I1217 20:35:19.322515  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:19.322786  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:19.822232  522827 type.go:168] "Request Body" body=""
	I1217 20:35:19.822312  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:19.822602  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:20.322257  522827 type.go:168] "Request Body" body=""
	I1217 20:35:20.322333  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:20.322679  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:20.822345  522827 type.go:168] "Request Body" body=""
	I1217 20:35:20.822415  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:20.822676  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:21.322243  522827 type.go:168] "Request Body" body=""
	I1217 20:35:21.322326  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:21.322660  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:35:21.322713  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:35:21.822270  522827 type.go:168] "Request Body" body=""
	I1217 20:35:21.822356  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:21.822667  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:22.322750  522827 type.go:168] "Request Body" body=""
	I1217 20:35:22.322821  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:22.323094  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:22.823051  522827 type.go:168] "Request Body" body=""
	I1217 20:35:22.823129  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:22.823477  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:23.322217  522827 type.go:168] "Request Body" body=""
	I1217 20:35:23.322297  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:23.322625  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:23.822303  522827 type.go:168] "Request Body" body=""
	I1217 20:35:23.822373  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:23.822637  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:35:23.822680  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:35:24.322343  522827 type.go:168] "Request Body" body=""
	I1217 20:35:24.322422  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:24.322779  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:24.822483  522827 type.go:168] "Request Body" body=""
	I1217 20:35:24.822568  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:24.822893  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:25.322199  522827 type.go:168] "Request Body" body=""
	I1217 20:35:25.322274  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:25.322559  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:25.822235  522827 type.go:168] "Request Body" body=""
	I1217 20:35:25.822320  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:25.822638  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:26.322257  522827 type.go:168] "Request Body" body=""
	I1217 20:35:26.322337  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:26.322663  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:35:26.322718  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:35:26.822248  522827 type.go:168] "Request Body" body=""
	I1217 20:35:26.822315  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:26.822587  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:27.322563  522827 type.go:168] "Request Body" body=""
	I1217 20:35:27.322640  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:27.322979  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:27.822237  522827 type.go:168] "Request Body" body=""
	I1217 20:35:27.822313  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:27.822672  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:28.322358  522827 type.go:168] "Request Body" body=""
	I1217 20:35:28.322427  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:28.322726  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:35:28.322768  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:35:28.822428  522827 type.go:168] "Request Body" body=""
	I1217 20:35:28.822502  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:28.822834  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:29.322244  522827 type.go:168] "Request Body" body=""
	I1217 20:35:29.322327  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:29.322664  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:29.822221  522827 type.go:168] "Request Body" body=""
	I1217 20:35:29.822293  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:29.822604  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:30.322244  522827 type.go:168] "Request Body" body=""
	I1217 20:35:30.322319  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:30.322684  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:30.822264  522827 type.go:168] "Request Body" body=""
	I1217 20:35:30.822339  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:30.822658  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:35:30.822715  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:35:31.322385  522827 type.go:168] "Request Body" body=""
	I1217 20:35:31.322460  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:31.322798  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:31.822531  522827 type.go:168] "Request Body" body=""
	I1217 20:35:31.822610  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:31.822946  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:32.322713  522827 type.go:168] "Request Body" body=""
	I1217 20:35:32.322793  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:32.323145  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:32.822950  522827 type.go:168] "Request Body" body=""
	I1217 20:35:32.823025  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:32.823278  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:35:32.823318  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:35:33.323110  522827 type.go:168] "Request Body" body=""
	I1217 20:35:33.323192  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:33.323540  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:33.822246  522827 type.go:168] "Request Body" body=""
	I1217 20:35:33.822323  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:33.822658  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:34.322218  522827 type.go:168] "Request Body" body=""
	I1217 20:35:34.322300  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:34.322661  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:34.822268  522827 type.go:168] "Request Body" body=""
	I1217 20:35:34.822347  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:34.822680  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:35.322230  522827 type.go:168] "Request Body" body=""
	I1217 20:35:35.322307  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:35.322640  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:35:35.322702  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:35:35.822209  522827 type.go:168] "Request Body" body=""
	I1217 20:35:35.822278  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:35.822595  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:36.322264  522827 type.go:168] "Request Body" body=""
	I1217 20:35:36.322343  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:36.322666  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:36.822232  522827 type.go:168] "Request Body" body=""
	I1217 20:35:36.822306  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:36.822658  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:37.322496  522827 type.go:168] "Request Body" body=""
	I1217 20:35:37.322571  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:37.322824  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:35:37.322862  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:35:37.822509  522827 type.go:168] "Request Body" body=""
	I1217 20:35:37.822586  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:37.822928  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:38.322513  522827 type.go:168] "Request Body" body=""
	I1217 20:35:38.322595  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:38.323137  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:38.822886  522827 type.go:168] "Request Body" body=""
	I1217 20:35:38.822959  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:38.823295  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:39.323106  522827 type.go:168] "Request Body" body=""
	I1217 20:35:39.323188  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:39.323560  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:35:39.323633  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:35:39.822201  522827 type.go:168] "Request Body" body=""
	I1217 20:35:39.822276  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:39.822619  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:40.322173  522827 type.go:168] "Request Body" body=""
	I1217 20:35:40.322246  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:40.322545  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:40.822242  522827 type.go:168] "Request Body" body=""
	I1217 20:35:40.822317  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:40.822754  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:41.322470  522827 type.go:168] "Request Body" body=""
	I1217 20:35:41.322556  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:41.322901  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:41.822209  522827 type.go:168] "Request Body" body=""
	I1217 20:35:41.822282  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:41.822536  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 20:35:41.822583  522827 node_ready.go:55] error getting node "functional-655452" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-655452": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 20:35:42.322519  522827 type.go:168] "Request Body" body=""
	I1217 20:35:42.322603  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:42.322998  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:42.822247  522827 type.go:168] "Request Body" body=""
	I1217 20:35:42.822336  522827 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-655452" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 20:35:42.822693  522827 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 20:35:43.322188  522827 type.go:168] "Request Body" body=""
	I1217 20:35:43.322249  522827 node_ready.go:38] duration metric: took 6m0.000239045s for node "functional-655452" to be "Ready" ...
	I1217 20:35:43.325291  522827 out.go:203] 
	W1217 20:35:43.328188  522827 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1217 20:35:43.328206  522827 out.go:285] * 
	W1217 20:35:43.330331  522827 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 20:35:43.333111  522827 out.go:203] 
	
	
	==> CRI-O <==
	Dec 17 20:35:52 functional-655452 crio[5447]: time="2025-12-17T20:35:52.248452743Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=8a037fb8-47fe-4682-a06b-c651dbe2b91e name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:35:53 functional-655452 crio[5447]: time="2025-12-17T20:35:53.342654144Z" level=info msg="Checking image status: minikube-local-cache-test:functional-655452" id=2bd5fa93-b92c-4f8d-a6f9-3bc1f05793ac name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:35:53 functional-655452 crio[5447]: time="2025-12-17T20:35:53.342823049Z" level=info msg="Resolving \"minikube-local-cache-test\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 17 20:35:53 functional-655452 crio[5447]: time="2025-12-17T20:35:53.342864723Z" level=info msg="Image minikube-local-cache-test:functional-655452 not found" id=2bd5fa93-b92c-4f8d-a6f9-3bc1f05793ac name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:35:53 functional-655452 crio[5447]: time="2025-12-17T20:35:53.342939669Z" level=info msg="Neither image nor artfiact minikube-local-cache-test:functional-655452 found" id=2bd5fa93-b92c-4f8d-a6f9-3bc1f05793ac name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:35:53 functional-655452 crio[5447]: time="2025-12-17T20:35:53.367286402Z" level=info msg="Checking image status: docker.io/library/minikube-local-cache-test:functional-655452" id=c3e35e06-0010-4a5b-9ef0-d2f451c83286 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:35:53 functional-655452 crio[5447]: time="2025-12-17T20:35:53.367426539Z" level=info msg="Image docker.io/library/minikube-local-cache-test:functional-655452 not found" id=c3e35e06-0010-4a5b-9ef0-d2f451c83286 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:35:53 functional-655452 crio[5447]: time="2025-12-17T20:35:53.367467671Z" level=info msg="Neither image nor artfiact docker.io/library/minikube-local-cache-test:functional-655452 found" id=c3e35e06-0010-4a5b-9ef0-d2f451c83286 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:35:53 functional-655452 crio[5447]: time="2025-12-17T20:35:53.39201649Z" level=info msg="Checking image status: localhost/library/minikube-local-cache-test:functional-655452" id=a13d7720-bb07-4b8f-9410-0a0d82ddbada name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:35:53 functional-655452 crio[5447]: time="2025-12-17T20:35:53.392181046Z" level=info msg="Image localhost/library/minikube-local-cache-test:functional-655452 not found" id=a13d7720-bb07-4b8f-9410-0a0d82ddbada name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:35:53 functional-655452 crio[5447]: time="2025-12-17T20:35:53.392245268Z" level=info msg="Neither image nor artfiact localhost/library/minikube-local-cache-test:functional-655452 found" id=a13d7720-bb07-4b8f-9410-0a0d82ddbada name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:35:54 functional-655452 crio[5447]: time="2025-12-17T20:35:54.358690122Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=e9c9a3c5-f0ec-491f-b467-a4fb566a7e4a name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:35:54 functional-655452 crio[5447]: time="2025-12-17T20:35:54.699662766Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=2f58d9f9-d198-43e4-b155-576924a7469c name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:35:54 functional-655452 crio[5447]: time="2025-12-17T20:35:54.699807563Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=2f58d9f9-d198-43e4-b155-576924a7469c name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:35:54 functional-655452 crio[5447]: time="2025-12-17T20:35:54.699843494Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=2f58d9f9-d198-43e4-b155-576924a7469c name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:35:55 functional-655452 crio[5447]: time="2025-12-17T20:35:55.242462245Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=8ee73752-fcec-461d-ad04-e7b693a40594 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:35:55 functional-655452 crio[5447]: time="2025-12-17T20:35:55.242603243Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=8ee73752-fcec-461d-ad04-e7b693a40594 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:35:55 functional-655452 crio[5447]: time="2025-12-17T20:35:55.242639059Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=8ee73752-fcec-461d-ad04-e7b693a40594 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:35:55 functional-655452 crio[5447]: time="2025-12-17T20:35:55.267407227Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=96ec17f9-40c0-4dcf-9b01-6d9e24b90fd5 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:35:55 functional-655452 crio[5447]: time="2025-12-17T20:35:55.267560402Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=96ec17f9-40c0-4dcf-9b01-6d9e24b90fd5 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:35:55 functional-655452 crio[5447]: time="2025-12-17T20:35:55.26762245Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=96ec17f9-40c0-4dcf-9b01-6d9e24b90fd5 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:35:55 functional-655452 crio[5447]: time="2025-12-17T20:35:55.293587381Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=b9a60d8c-7a33-4a91-bdf0-5e02a9ced5db name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:35:55 functional-655452 crio[5447]: time="2025-12-17T20:35:55.293749548Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=b9a60d8c-7a33-4a91-bdf0-5e02a9ced5db name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:35:55 functional-655452 crio[5447]: time="2025-12-17T20:35:55.293805229Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=b9a60d8c-7a33-4a91-bdf0-5e02a9ced5db name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:35:55 functional-655452 crio[5447]: time="2025-12-17T20:35:55.867866993Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=7b807932-d5c4-4be7-9710-4a58a027c9d7 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:35:59.888916    9617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:35:59.889299    9617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:35:59.890893    9617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:35:59.891427    9617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:35:59.892923    9617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec17 19:02] hrtimer: interrupt took 71042327 ns
	[Dec17 20:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec17 20:12] overlayfs: idmapped layers are currently not supported
	[  +0.080288] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec17 20:17] overlayfs: idmapped layers are currently not supported
	[Dec17 20:18] overlayfs: idmapped layers are currently not supported
	[Dec17 20:35] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 20:35:59 up  3:18,  0 user,  load average: 0.56, 0.36, 0.91
	Linux functional-655452 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 20:35:57 functional-655452 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 20:35:58 functional-655452 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1156.
	Dec 17 20:35:58 functional-655452 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 20:35:58 functional-655452 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 20:35:58 functional-655452 kubelet[9503]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 20:35:58 functional-655452 kubelet[9503]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 20:35:58 functional-655452 kubelet[9503]: E1217 20:35:58.401358    9503 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 20:35:58 functional-655452 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 20:35:58 functional-655452 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 20:35:59 functional-655452 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1157.
	Dec 17 20:35:59 functional-655452 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 20:35:59 functional-655452 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 20:35:59 functional-655452 kubelet[9532]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 20:35:59 functional-655452 kubelet[9532]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 20:35:59 functional-655452 kubelet[9532]: E1217 20:35:59.130706    9532 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 20:35:59 functional-655452 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 20:35:59 functional-655452 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 20:35:59 functional-655452 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1158.
	Dec 17 20:35:59 functional-655452 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 20:35:59 functional-655452 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 20:35:59 functional-655452 kubelet[9618]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 20:35:59 functional-655452 kubelet[9618]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 20:35:59 functional-655452 kubelet[9618]: E1217 20:35:59.877110    9618 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 20:35:59 functional-655452 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 20:35:59 functional-655452 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-655452 -n functional-655452
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-655452 -n functional-655452: exit status 2 (359.03135ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-655452" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly (2.80s)
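The wall of round_trippers lines in the log above is a single poll loop seen from the client side: minikube GETs /api/v1/nodes/functional-655452 every 500ms, every attempt fails with "connect: connection refused" because the kubelet crash loop (journal restart counter climbing 1156 -> 1158 in seconds) means nothing ever listens on 8441, and after roughly 720 attempts the 6m0s budget lapses into "WaitNodeCondition: context deadline exceeded" and GUEST_START. A minimal Go sketch of that poll-until-deadline pattern, with illustrative names only (this is not minikube's actual node_ready.go, and TLS setup is omitted):

	package main

	import (
		"context"
		"fmt"
		"net/http"
		"time"
	)

	// pollNodeReady GETs url every interval until it answers 200 or ctx expires.
	func pollNodeReady(ctx context.Context, url string, interval time.Duration) error {
		ticker := time.NewTicker(interval)
		defer ticker.Stop()
		for {
			select {
			case <-ctx.Done():
				return fmt.Errorf("waiting for node to be ready: %w", ctx.Err())
			case <-ticker.C:
				resp, err := http.Get(url) // real code needs the cluster CA for https
				if err != nil {
					// e.g. "dial tcp 192.168.49.2:8441: connect: connection refused";
					// logged as a warning, then retried, exactly like the W-lines above
					continue
				}
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // node object fetched; caller then checks its Ready condition
				}
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
		err := pollNodeReady(ctx, "https://192.168.49.2:8441/api/v1/nodes/functional-655452", 500*time.Millisecond)
		fmt.Println(err) // after 6m0s: "waiting for node to be ready: context deadline exceeded"
	}

Against a healthy endpoint the loop returns on the first 200; here every iteration takes the error branch until the context deadline fires, which is the "wait 6m0s for node" failure reported above.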

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig (734.62s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-655452 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1217 20:38:56.665147  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:40:30.851899  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-643319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:41:53.921979  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-643319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:43:56.661815  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:45:30.851868  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-643319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-655452 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 109 (12m12.464633113s)

                                                
                                                
-- stdout --
	* [functional-655452] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21808
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21808-485134/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-485134/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-655452" primary control-plane node in "functional-655452" cluster
	* Pulling base image v0.0.48-1765661130-22141 ...
	* Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	  - apiserver.enable-admission-plugins=NamespaceAutoProvision
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001248714s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000238438s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000238438s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

                                                
                                                
** /stderr **
functional_test.go:774: failed to restart minikube. args "out/minikube-linux-arm64 start -p functional-655452 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 109
functional_test.go:776: restart took 12m12.469122249s for "functional-655452" cluster.
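All three kubeadm attempts above die identically: kubelet v1.35.0-rc.1 fails its own configuration validation on this cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1"), so no static pods start, the 4m0s healthz wait times out, and the restart exits 109. Per the SystemVerification warning, the opt-out is the KubeletConfiguration option FailCgroupV1. A minimal config fragment making that warning concrete, assuming the usual kubelet.config.k8s.io/v1beta1 serialization (minikube generates this file itself; this is a sketch, not a recommendation):

	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	# Explicitly opt in to deprecated cgroup v1 support; without this,
	# kubelet v1.35+ refuses to start on a cgroup v1 host (see KEP-5573).
	failCgroupV1: false

The cleaner fix is moving the host to cgroup v2; the generic "--extra-config=kubelet.cgroup-driver=systemd" hint printed above does not change the cgroup version and so is unlikely by itself to clear this validation failure.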
I1217 20:48:13.725865  488412 config.go:182] Loaded profile config "functional-655452": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
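kubeadm's troubleshooting advice, repeated three times above, amounts to the following commands; run inside the node they reproduce the kubelet journal excerpt already captured in the ==> kubelet <== section (minikube ssh is the obvious way in; the rest are verbatim from the kubeadm output):

	# open a shell inside the node container, then inspect the kubelet unit
	minikube ssh -p functional-655452
	systemctl status kubelet
	journalctl -xeu kubelet
	# the health endpoint kubeadm polled for up to 4m0s:
	curl -sSL http://127.0.0.1:10248/healthz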
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-655452
helpers_test.go:244: (dbg) docker inspect functional-655452:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ce7a6a54d14f425b4187bd0ca1b803527a4413b0e2019a0bca6ff0c4cc720b0f",
	        "Created": "2025-12-17T20:21:23.127111799Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 517339,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T20:21:23.205943598Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/ce7a6a54d14f425b4187bd0ca1b803527a4413b0e2019a0bca6ff0c4cc720b0f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ce7a6a54d14f425b4187bd0ca1b803527a4413b0e2019a0bca6ff0c4cc720b0f/hostname",
	        "HostsPath": "/var/lib/docker/containers/ce7a6a54d14f425b4187bd0ca1b803527a4413b0e2019a0bca6ff0c4cc720b0f/hosts",
	        "LogPath": "/var/lib/docker/containers/ce7a6a54d14f425b4187bd0ca1b803527a4413b0e2019a0bca6ff0c4cc720b0f/ce7a6a54d14f425b4187bd0ca1b803527a4413b0e2019a0bca6ff0c4cc720b0f-json.log",
	        "Name": "/functional-655452",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-655452:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-655452",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ce7a6a54d14f425b4187bd0ca1b803527a4413b0e2019a0bca6ff0c4cc720b0f",
	                "LowerDir": "/var/lib/docker/overlay2/b078ea641dafe3bb75ca3d83c43fd851f0f209e5f3a5a8b9ceab888e394031a8-init/diff:/var/lib/docker/overlay2/c4759b344ecb109c83d66e4ef56c76903f1d1e597efb86ab2d2c7911b1130a8f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b078ea641dafe3bb75ca3d83c43fd851f0f209e5f3a5a8b9ceab888e394031a8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b078ea641dafe3bb75ca3d83c43fd851f0f209e5f3a5a8b9ceab888e394031a8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b078ea641dafe3bb75ca3d83c43fd851f0f209e5f3a5a8b9ceab888e394031a8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-655452",
	                "Source": "/var/lib/docker/volumes/functional-655452/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-655452",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-655452",
	                "name.minikube.sigs.k8s.io": "functional-655452",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "77ae7dbcf69b3201be4ce65a88d235dbad078142318e01a5939425f9766fb924",
	            "SandboxKey": "/var/run/docker/netns/77ae7dbcf69b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33178"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33179"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33182"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33180"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33181"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-655452": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "36:c6:98:b5:58:5a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b68cd6e69ccdb81b0870dd976e2d5401ca696480842d2c469293121d59435cfb",
	                    "EndpointID": "82926bb4b08b5f66131c1026a8bc72a582e9cb8c6d97a278a494965923f47cb0",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-655452",
	                        "ce7a6a54d14f"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
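Note: the inspect output above is where the host-port mappings used by the rest of this report come from (22/tcp on 127.0.0.1:33178, the API server's 8441/tcp on 127.0.0.1:33181, and so on). A sketch for pulling a single mapping out; the Go-template form is the same expression minikube itself runs later in this log, and the jq form assumes jq is installed:

    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' functional-655452
    docker inspect functional-655452 | jq -r '.[0].NetworkSettings.Ports["22/tcp"][0].HostPort'

Both print 33178 for this run.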
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-655452 -n functional-655452
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-655452 -n functional-655452: exit status 2 (297.220843ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
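Note: the harness tolerates "exit status 2 (may be ok)" here, apparently because --format={{.Host}} only reports the host state (Running in this run) while the non-zero exit code also reflects the Kubernetes components, which are unhealthy. Some broader status queries, as a sketch (these flags exist in minikube v1.37; exact exit-code semantics vary by version, so treat the code as indicative):

    out/minikube-linux-arm64 -p functional-655452 status
    out/minikube-linux-arm64 -p functional-655452 status -o json
    out/minikube-linux-arm64 -p functional-655452 status --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'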
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                      ARGS                                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-643319 image ls --format yaml --alsologtostderr                                                                                      │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │ 17 Dec 25 20:21 UTC │
	│ ssh     │ functional-643319 ssh pgrep buildkitd                                                                                                           │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │                     │
	│ image   │ functional-643319 image build -t localhost/my-image:functional-643319 testdata/build --alsologtostderr                                          │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │ 17 Dec 25 20:21 UTC │
	│ image   │ functional-643319 image ls --format json --alsologtostderr                                                                                      │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │ 17 Dec 25 20:21 UTC │
	│ image   │ functional-643319 image ls --format table --alsologtostderr                                                                                     │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │ 17 Dec 25 20:21 UTC │
	│ image   │ functional-643319 image ls                                                                                                                      │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │ 17 Dec 25 20:21 UTC │
	│ delete  │ -p functional-643319                                                                                                                            │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │ 17 Dec 25 20:21 UTC │
	│ start   │ -p functional-655452 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │                     │
	│ start   │ -p functional-655452 --alsologtostderr -v=8                                                                                                     │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:29 UTC │                     │
	│ cache   │ functional-655452 cache add registry.k8s.io/pause:3.1                                                                                           │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:35 UTC │ 17 Dec 25 20:35 UTC │
	│ cache   │ functional-655452 cache add registry.k8s.io/pause:3.3                                                                                           │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:35 UTC │ 17 Dec 25 20:35 UTC │
	│ cache   │ functional-655452 cache add registry.k8s.io/pause:latest                                                                                        │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:35 UTC │ 17 Dec 25 20:35 UTC │
	│ cache   │ functional-655452 cache add minikube-local-cache-test:functional-655452                                                                         │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:35 UTC │ 17 Dec 25 20:35 UTC │
	│ cache   │ functional-655452 cache delete minikube-local-cache-test:functional-655452                                                                      │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:35 UTC │ 17 Dec 25 20:35 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                │ minikube          │ jenkins │ v1.37.0 │ 17 Dec 25 20:35 UTC │ 17 Dec 25 20:35 UTC │
	│ cache   │ list                                                                                                                                            │ minikube          │ jenkins │ v1.37.0 │ 17 Dec 25 20:35 UTC │ 17 Dec 25 20:35 UTC │
	│ ssh     │ functional-655452 ssh sudo crictl images                                                                                                        │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:35 UTC │ 17 Dec 25 20:35 UTC │
	│ ssh     │ functional-655452 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                              │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:35 UTC │ 17 Dec 25 20:35 UTC │
	│ ssh     │ functional-655452 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                         │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:35 UTC │                     │
	│ cache   │ functional-655452 cache reload                                                                                                                  │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:35 UTC │ 17 Dec 25 20:35 UTC │
	│ ssh     │ functional-655452 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                         │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:35 UTC │ 17 Dec 25 20:35 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                │ minikube          │ jenkins │ v1.37.0 │ 17 Dec 25 20:35 UTC │ 17 Dec 25 20:35 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                             │ minikube          │ jenkins │ v1.37.0 │ 17 Dec 25 20:35 UTC │ 17 Dec 25 20:35 UTC │
	│ kubectl │ functional-655452 kubectl -- --context functional-655452 get pods                                                                               │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:35 UTC │                     │
	│ start   │ -p functional-655452 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                        │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:36 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 20:36:01
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 20:36:01.304180  528764 out.go:360] Setting OutFile to fd 1 ...
	I1217 20:36:01.304299  528764 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:36:01.304303  528764 out.go:374] Setting ErrFile to fd 2...
	I1217 20:36:01.304307  528764 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:36:01.304548  528764 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-485134/.minikube/bin
	I1217 20:36:01.304941  528764 out.go:368] Setting JSON to false
	I1217 20:36:01.305793  528764 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":11911,"bootTime":1765991851,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1217 20:36:01.305860  528764 start.go:143] virtualization:  
	I1217 20:36:01.309940  528764 out.go:179] * [functional-655452] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 20:36:01.313178  528764 out.go:179]   - MINIKUBE_LOCATION=21808
	I1217 20:36:01.313261  528764 notify.go:221] Checking for updates...
	I1217 20:36:01.319276  528764 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 20:36:01.322533  528764 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-485134/kubeconfig
	I1217 20:36:01.325481  528764 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-485134/.minikube
	I1217 20:36:01.328332  528764 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 20:36:01.331257  528764 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 20:36:01.334638  528764 config.go:182] Loaded profile config "functional-655452": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 20:36:01.334735  528764 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 20:36:01.377324  528764 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 20:36:01.377436  528764 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 20:36:01.442821  528764 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-17 20:36:01.432767342 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 20:36:01.442911  528764 docker.go:319] overlay module found
	I1217 20:36:01.446093  528764 out.go:179] * Using the docker driver based on existing profile
	I1217 20:36:01.448835  528764 start.go:309] selected driver: docker
	I1217 20:36:01.448847  528764 start.go:927] validating driver "docker" against &{Name:functional-655452 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-655452 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:36:01.448948  528764 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 20:36:01.449055  528764 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 20:36:01.502893  528764 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-17 20:36:01.493096577 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 20:36:01.503296  528764 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 20:36:01.503325  528764 cni.go:84] Creating CNI manager for ""
	I1217 20:36:01.503373  528764 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 20:36:01.503423  528764 start.go:353] cluster config:
	{Name:functional-655452 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-655452 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:36:01.506646  528764 out.go:179] * Starting "functional-655452" primary control-plane node in "functional-655452" cluster
	I1217 20:36:01.509580  528764 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 20:36:01.512594  528764 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 20:36:01.515481  528764 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1217 20:36:01.515521  528764 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21808-485134/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4
	I1217 20:36:01.515533  528764 cache.go:65] Caching tarball of preloaded images
	I1217 20:36:01.515555  528764 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 20:36:01.515635  528764 preload.go:238] Found /home/jenkins/minikube-integration/21808-485134/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1217 20:36:01.515645  528764 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on crio
	I1217 20:36:01.515757  528764 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/config.json ...
	I1217 20:36:01.536964  528764 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 20:36:01.536994  528764 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 20:36:01.537012  528764 cache.go:243] Successfully downloaded all kic artifacts
	I1217 20:36:01.537046  528764 start.go:360] acquireMachinesLock for functional-655452: {Name:mk8b480106227149e1b79da3c4f580b9dc5e0f96 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 20:36:01.537100  528764 start.go:364] duration metric: took 37.99µs to acquireMachinesLock for "functional-655452"
	I1217 20:36:01.537118  528764 start.go:96] Skipping create...Using existing machine configuration
	I1217 20:36:01.537122  528764 fix.go:54] fixHost starting: 
	I1217 20:36:01.537383  528764 cli_runner.go:164] Run: docker container inspect functional-655452 --format={{.State.Status}}
	I1217 20:36:01.554557  528764 fix.go:112] recreateIfNeeded on functional-655452: state=Running err=<nil>
	W1217 20:36:01.554578  528764 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 20:36:01.557934  528764 out.go:252] * Updating the running docker "functional-655452" container ...
	I1217 20:36:01.557966  528764 machine.go:94] provisionDockerMachine start ...
	I1217 20:36:01.558073  528764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
	I1217 20:36:01.576191  528764 main.go:143] libmachine: Using SSH client type: native
	I1217 20:36:01.576509  528764 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1217 20:36:01.576515  528764 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 20:36:01.707478  528764 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-655452
	
	I1217 20:36:01.707493  528764 ubuntu.go:182] provisioning hostname "functional-655452"
	I1217 20:36:01.707564  528764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
	I1217 20:36:01.725762  528764 main.go:143] libmachine: Using SSH client type: native
	I1217 20:36:01.726063  528764 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1217 20:36:01.726071  528764 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-655452 && echo "functional-655452" | sudo tee /etc/hostname
	I1217 20:36:01.865176  528764 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-655452
	
	I1217 20:36:01.865255  528764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
	I1217 20:36:01.884852  528764 main.go:143] libmachine: Using SSH client type: native
	I1217 20:36:01.885159  528764 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1217 20:36:01.885174  528764 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-655452' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-655452/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-655452' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 20:36:02.016339  528764 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 20:36:02.016355  528764 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21808-485134/.minikube CaCertPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21808-485134/.minikube}
	I1217 20:36:02.016378  528764 ubuntu.go:190] setting up certificates
	I1217 20:36:02.016388  528764 provision.go:84] configureAuth start
	I1217 20:36:02.016451  528764 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-655452
	I1217 20:36:02.035106  528764 provision.go:143] copyHostCerts
	I1217 20:36:02.035175  528764 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem, removing ...
	I1217 20:36:02.035183  528764 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem
	I1217 20:36:02.035257  528764 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem (1082 bytes)
	I1217 20:36:02.035375  528764 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem, removing ...
	I1217 20:36:02.035379  528764 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem
	I1217 20:36:02.035406  528764 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem (1123 bytes)
	I1217 20:36:02.035470  528764 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem, removing ...
	I1217 20:36:02.035473  528764 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem
	I1217 20:36:02.035496  528764 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem (1675 bytes)
	I1217 20:36:02.035545  528764 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem org=jenkins.functional-655452 san=[127.0.0.1 192.168.49.2 functional-655452 localhost minikube]
	I1217 20:36:02.115164  528764 provision.go:177] copyRemoteCerts
	I1217 20:36:02.115221  528764 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 20:36:02.115260  528764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
	I1217 20:36:02.139076  528764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/functional-655452/id_rsa Username:docker}
	I1217 20:36:02.235601  528764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 20:36:02.254294  528764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 20:36:02.272604  528764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 20:36:02.290727  528764 provision.go:87] duration metric: took 274.326255ms to configureAuth
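Note: configureAuth above regenerates the machine's server certificate with the SANs the log lists (127.0.0.1, 192.168.49.2, functional-655452, localhost, minikube). A sketch to confirm what actually landed in the file, using the path the log prints and standard openssl flags:

    openssl x509 -noout -text -in /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem | grep -A1 'Subject Alternative Name'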
	I1217 20:36:02.290752  528764 ubuntu.go:206] setting minikube options for container-runtime
	I1217 20:36:02.291001  528764 config.go:182] Loaded profile config "functional-655452": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 20:36:02.291105  528764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
	I1217 20:36:02.309578  528764 main.go:143] libmachine: Using SSH client type: native
	I1217 20:36:02.309891  528764 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1217 20:36:02.309902  528764 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 20:36:02.644802  528764 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 20:36:02.644817  528764 machine.go:97] duration metric: took 1.086843683s to provisionDockerMachine
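Note: the SSH command above writes a cri-o drop-in so the cluster's service CIDR is treated as an insecure registry, then restarts cri-o. A one-line check that it landed (file path and expected content are both taken from the log):

    out/minikube-linux-arm64 -p functional-655452 ssh -- cat /etc/sysconfig/crio.minikube
    # expected: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '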
	I1217 20:36:02.644827  528764 start.go:293] postStartSetup for "functional-655452" (driver="docker")
	I1217 20:36:02.644838  528764 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 20:36:02.644899  528764 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 20:36:02.644944  528764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
	I1217 20:36:02.663334  528764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/functional-655452/id_rsa Username:docker}
	I1217 20:36:02.759464  528764 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 20:36:02.762934  528764 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 20:36:02.762952  528764 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 20:36:02.762970  528764 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-485134/.minikube/addons for local assets ...
	I1217 20:36:02.763029  528764 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-485134/.minikube/files for local assets ...
	I1217 20:36:02.763103  528764 filesync.go:149] local asset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem -> 4884122.pem in /etc/ssl/certs
	I1217 20:36:02.763175  528764 filesync.go:149] local asset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/test/nested/copy/488412/hosts -> hosts in /etc/test/nested/copy/488412
	I1217 20:36:02.763216  528764 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/488412
	I1217 20:36:02.770652  528764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem --> /etc/ssl/certs/4884122.pem (1708 bytes)
	I1217 20:36:02.788458  528764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/test/nested/copy/488412/hosts --> /etc/test/nested/copy/488412/hosts (40 bytes)
	I1217 20:36:02.805971  528764 start.go:296] duration metric: took 161.129975ms for postStartSetup
	I1217 20:36:02.806055  528764 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 20:36:02.806105  528764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
	I1217 20:36:02.832327  528764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/functional-655452/id_rsa Username:docker}
	I1217 20:36:02.932517  528764 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 20:36:02.937022  528764 fix.go:56] duration metric: took 1.399892436s for fixHost
	I1217 20:36:02.937037  528764 start.go:83] releasing machines lock for "functional-655452", held for 1.399929845s
	I1217 20:36:02.937101  528764 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-655452
	I1217 20:36:02.954767  528764 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem (1338 bytes)
	W1217 20:36:02.954820  528764 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412_empty.pem, impossibly tiny 0 bytes
	I1217 20:36:02.954828  528764 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 20:36:02.954855  528764 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem (1082 bytes)
	I1217 20:36:02.954880  528764 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem (1123 bytes)
	I1217 20:36:02.954903  528764 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem (1675 bytes)
	I1217 20:36:02.954966  528764 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem (1708 bytes)
	I1217 20:36:02.955032  528764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem --> /usr/share/ca-certificates/488412.pem (1338 bytes)
	I1217 20:36:02.955078  528764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
	I1217 20:36:02.972629  528764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/functional-655452/id_rsa Username:docker}
	I1217 20:36:03.082963  528764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem --> /usr/share/ca-certificates/4884122.pem (1708 bytes)
	I1217 20:36:03.101544  528764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 20:36:03.119807  528764 ssh_runner.go:195] Run: openssl version
	I1217 20:36:03.126345  528764 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/488412.pem
	I1217 20:36:03.134006  528764 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/488412.pem /etc/ssl/certs/488412.pem
	I1217 20:36:03.141755  528764 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/488412.pem
	I1217 20:36:03.145627  528764 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 20:21 /usr/share/ca-certificates/488412.pem
	I1217 20:36:03.145694  528764 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/488412.pem
	I1217 20:36:03.186918  528764 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 20:36:03.196074  528764 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4884122.pem
	I1217 20:36:03.205007  528764 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4884122.pem /etc/ssl/certs/4884122.pem
	I1217 20:36:03.212820  528764 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4884122.pem
	I1217 20:36:03.216798  528764 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 20:21 /usr/share/ca-certificates/4884122.pem
	I1217 20:36:03.216865  528764 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4884122.pem
	I1217 20:36:03.260241  528764 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 20:36:03.268200  528764 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:36:03.275663  528764 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 20:36:03.283259  528764 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:36:03.287077  528764 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:36:03.287187  528764 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:36:03.328526  528764 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 20:36:03.336152  528764 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-certificates >/dev/null 2>&1 && sudo update-ca-certificates || true"
	I1217 20:36:03.339768  528764 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-trust >/dev/null 2>&1 && sudo update-ca-trust extract || true"
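Note: the hash-named links tested above (51391683.0, 3ec20f2e.0, b5213941.0) follow OpenSSL's subject-hash convention: each CA in /etc/ssl/certs is reachable through a symlink named <subject-hash>.0. The hash comes from the same openssl invocation the log runs, e.g.:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # prints b5213941, matching the /etc/ssl/certs/b5213941.0 link checked above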
	I1217 20:36:03.343092  528764 ssh_runner.go:195] Run: cat /version.json
	I1217 20:36:03.343166  528764 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 20:36:03.444762  528764 ssh_runner.go:195] Run: systemctl --version
	I1217 20:36:03.450992  528764 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 20:36:03.489251  528764 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 20:36:03.493525  528764 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 20:36:03.493594  528764 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 20:36:03.501380  528764 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 20:36:03.501400  528764 start.go:496] detecting cgroup driver to use...
	I1217 20:36:03.501430  528764 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 20:36:03.501474  528764 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 20:36:03.519927  528764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 20:36:03.535865  528764 docker.go:218] disabling cri-docker service (if available) ...
	I1217 20:36:03.535924  528764 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 20:36:03.553665  528764 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 20:36:03.568077  528764 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 20:36:03.688788  528764 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 20:36:03.816391  528764 docker.go:234] disabling docker service ...
	I1217 20:36:03.816445  528764 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 20:36:03.832743  528764 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 20:36:03.846562  528764 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 20:36:03.965969  528764 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 20:36:04.109607  528764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 20:36:04.122680  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 20:36:04.137683  528764 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 20:36:04.137752  528764 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:36:04.147364  528764 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1217 20:36:04.147423  528764 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:36:04.157452  528764 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:36:04.166810  528764 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:36:04.176014  528764 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 20:36:04.184171  528764 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:36:04.192938  528764 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:36:04.201542  528764 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:36:04.210110  528764 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 20:36:04.217743  528764 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 20:36:04.225321  528764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:36:04.332263  528764 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 20:36:04.503245  528764 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 20:36:04.503305  528764 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 20:36:04.508393  528764 start.go:564] Will wait 60s for crictl version
	I1217 20:36:04.508461  528764 ssh_runner.go:195] Run: which crictl
	I1217 20:36:04.512401  528764 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 20:36:04.541968  528764 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 20:36:04.542059  528764 ssh_runner.go:195] Run: crio --version
	I1217 20:36:04.568941  528764 ssh_runner.go:195] Run: crio --version
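Note: the sed edits a few lines up rewrite /etc/crio/crio.conf.d/02-crio.conf to pin the pause image and to set cgroup_manager to "cgroupfs", matching the driver detected on the host. Since the failure suggestion at the top of this section points at the systemd cgroup driver instead, this file is worth inspecting; a sketch (path and keys are the ones the log edits):

    out/minikube-linux-arm64 -p functional-655452 ssh -- grep -E 'cgroup_manager|conmon_cgroup|pause_image' /etc/crio/crio.conf.d/02-crio.conf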
	I1217 20:36:04.602248  528764 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	I1217 20:36:04.604894  528764 cli_runner.go:164] Run: docker network inspect functional-655452 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 20:36:04.620832  528764 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1217 20:36:04.627460  528764 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1217 20:36:04.630066  528764 kubeadm.go:884] updating cluster {Name:functional-655452 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-655452 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 20:36:04.630187  528764 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1217 20:36:04.630246  528764 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 20:36:04.668067  528764 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 20:36:04.668079  528764 crio.go:433] Images already preloaded, skipping extraction
	I1217 20:36:04.668136  528764 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 20:36:04.698017  528764 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 20:36:04.698030  528764 cache_images.go:86] Images are preloaded, skipping loading
	I1217 20:36:04.698036  528764 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 crio true true} ...
	I1217 20:36:04.698140  528764 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-655452 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-655452 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 20:36:04.698216  528764 ssh_runner.go:195] Run: crio config
	I1217 20:36:04.769162  528764 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1217 20:36:04.769193  528764 cni.go:84] Creating CNI manager for ""
	I1217 20:36:04.769200  528764 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 20:36:04.769208  528764 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 20:36:04.769233  528764 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-655452 NodeName:functional-655452 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 20:36:04.769373  528764 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-655452"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 20:36:04.769444  528764 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1217 20:36:04.777167  528764 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 20:36:04.777239  528764 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 20:36:04.784566  528764 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1217 20:36:04.797984  528764 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1217 20:36:04.810563  528764 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2069 bytes)
	I1217 20:36:04.823513  528764 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1217 20:36:04.827291  528764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:36:04.950251  528764 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 20:36:05.072220  528764 certs.go:69] Setting up /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452 for IP: 192.168.49.2
	I1217 20:36:05.072231  528764 certs.go:195] generating shared ca certs ...
	I1217 20:36:05.072245  528764 certs.go:227] acquiring lock for ca certs: {Name:mk2b1bd9fa0b029b02dd38a7b1b08717d4426eca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:36:05.072401  528764 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.key
	I1217 20:36:05.072442  528764 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.key
	I1217 20:36:05.072448  528764 certs.go:257] generating profile certs ...
	I1217 20:36:05.072540  528764 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/client.key
	I1217 20:36:05.072591  528764 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/apiserver.key.aa95dda5
	I1217 20:36:05.072629  528764 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/proxy-client.key
	I1217 20:36:05.072739  528764 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem (1338 bytes)
	W1217 20:36:05.072768  528764 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412_empty.pem, impossibly tiny 0 bytes
	I1217 20:36:05.072780  528764 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 20:36:05.072805  528764 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem (1082 bytes)
	I1217 20:36:05.072827  528764 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem (1123 bytes)
	I1217 20:36:05.072848  528764 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem (1675 bytes)
	I1217 20:36:05.072891  528764 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem (1708 bytes)
	I1217 20:36:05.073535  528764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 20:36:05.100676  528764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 20:36:05.124485  528764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 20:36:05.145313  528764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1217 20:36:05.166267  528764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 20:36:05.185043  528764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1217 20:36:05.202568  528764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 20:36:05.220530  528764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 20:36:05.238845  528764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem --> /usr/share/ca-certificates/4884122.pem (1708 bytes)
	I1217 20:36:05.257230  528764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 20:36:05.275490  528764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem --> /usr/share/ca-certificates/488412.pem (1338 bytes)
	I1217 20:36:05.293936  528764 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 20:36:05.307062  528764 ssh_runner.go:195] Run: openssl version
	I1217 20:36:05.314048  528764 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/488412.pem
	I1217 20:36:05.321882  528764 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/488412.pem /etc/ssl/certs/488412.pem
	I1217 20:36:05.329752  528764 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/488412.pem
	I1217 20:36:05.333743  528764 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 20:21 /usr/share/ca-certificates/488412.pem
	I1217 20:36:05.333820  528764 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/488412.pem
	I1217 20:36:05.375575  528764 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 20:36:05.383326  528764 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4884122.pem
	I1217 20:36:05.390831  528764 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4884122.pem /etc/ssl/certs/4884122.pem
	I1217 20:36:05.398670  528764 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4884122.pem
	I1217 20:36:05.402451  528764 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 20:21 /usr/share/ca-certificates/4884122.pem
	I1217 20:36:05.402506  528764 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4884122.pem
	I1217 20:36:05.445761  528764 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 20:36:05.453165  528764 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:36:05.460611  528764 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 20:36:05.468452  528764 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:36:05.472228  528764 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:36:05.472283  528764 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:36:05.513950  528764 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
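(For reference: each cert install above follows the standard OpenSSL trust-store pattern — symlink the PEM into /etc/ssl/certs under its subject hash, where `openssl x509 -hash -noout` prints the hash and the `.0` suffix is the first slot for that hash. A sketch under those assumptions; it requires openssl on PATH and write access to /etc/ssl/certs.)

	// cahash.go — install a CA cert via subject-hash symlink (illustrative sketch).
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func installCA(pemPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", pemPath, err)
		}
		link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
		_ = os.Remove(link) // same effect as ln -fs: replace any stale link
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}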
	I1217 20:36:05.521563  528764 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 20:36:05.525764  528764 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 20:36:05.567120  528764 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 20:36:05.608840  528764 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 20:36:05.649788  528764 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 20:36:05.692741  528764 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 20:36:05.738724  528764 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
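(For reference: `-checkend 86400` makes openssl exit non-zero if the certificate expires within the next 86400 seconds, i.e. 24 hours, which is how the checks above decide whether a cert needs regenerating. The same check in pure Go stdlib, as a sketch; the cert path is one of those probed above.)

	// checkend.go — fail if a certificate expires within 24 hours (sketch).
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/front-proxy-client.crt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			fmt.Fprintln(os.Stderr, "no PEM block found")
			os.Exit(1)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if time.Until(cert.NotAfter) < 24*time.Hour {
			fmt.Println("certificate expires within 24h")
			os.Exit(1)
		}
		fmt.Println("certificate valid for at least 24h")
	}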
	I1217 20:36:05.779654  528764 kubeadm.go:401] StartCluster: {Name:functional-655452 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-655452 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:36:05.779744  528764 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 20:36:05.779806  528764 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 20:36:05.806396  528764 cri.go:89] found id: ""
	I1217 20:36:05.806453  528764 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 20:36:05.814019  528764 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 20:36:05.814027  528764 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 20:36:05.814076  528764 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 20:36:05.823754  528764 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 20:36:05.824259  528764 kubeconfig.go:125] found "functional-655452" server: "https://192.168.49.2:8441"
	I1217 20:36:05.825529  528764 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 20:36:05.834629  528764 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-17 20:21:29.177912325 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-17 20:36:04.817890668 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
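(For reference: the drift check above is just `diff -u old new`. Exit status 0 means the files are identical, 1 means drift was detected and the cluster is reconfigured, and 2 means diff itself failed. A sketch of that decision in Go; paths come from the log, the helper name is illustrative.)

	// drift.go — detect kubeadm config drift via diff's exit status (sketch).
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func driftDetected(oldPath, newPath string) (bool, error) {
		err := exec.Command("diff", "-u", oldPath, newPath).Run()
		if err == nil {
			return false, nil // exit 0: files identical
		}
		var ee *exec.ExitError
		if errors.As(err, &ee) && ee.ExitCode() == 1 {
			return true, nil // exit 1: files differ
		}
		return false, err // exit 2 or exec failure: a real error
	}

	func main() {
		drift, err := driftDetected("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
		fmt.Println("drift:", drift, "err:", err)
	}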
	I1217 20:36:05.834639  528764 kubeadm.go:1161] stopping kube-system containers ...
	I1217 20:36:05.834650  528764 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1217 20:36:05.834705  528764 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 20:36:05.867919  528764 cri.go:89] found id: ""
	I1217 20:36:05.867989  528764 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1217 20:36:05.885438  528764 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 20:36:05.893366  528764 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Dec 17 20:25 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec 17 20:25 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec 17 20:25 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec 17 20:25 /etc/kubernetes/scheduler.conf
	
	I1217 20:36:05.893420  528764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1217 20:36:05.901137  528764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1217 20:36:05.909490  528764 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1217 20:36:05.909550  528764 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 20:36:05.916910  528764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1217 20:36:05.924811  528764 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1217 20:36:05.924869  528764 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 20:36:05.932331  528764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1217 20:36:05.940039  528764 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1217 20:36:05.940108  528764 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
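(For reference: the grep/rm pairs above keep a kubeconfig only when it already points at the expected control-plane endpoint; files that fail the grep are deleted so kubeadm regenerates them in the next phase. An illustrative Go equivalent, with the endpoint and paths taken from the log.)

	// prune.go — drop kubeconfigs that do not target the expected endpoint (sketch).
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func pruneStale(path, endpoint string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		if strings.Contains(string(data), endpoint) {
			return nil // already targets the right endpoint; keep it
		}
		return os.Remove(path)
	}

	func main() {
		for _, f := range []string{
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		} {
			if err := pruneStale(f, "https://control-plane.minikube.internal:8441"); err != nil && !os.IsNotExist(err) {
				fmt.Fprintln(os.Stderr, err)
			}
		}
	}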
	I1217 20:36:05.947225  528764 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 20:36:05.955062  528764 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 20:36:06.001485  528764 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 20:36:07.569758  528764 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.568246795s)
	I1217 20:36:07.569817  528764 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1217 20:36:07.780039  528764 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 20:36:07.827231  528764 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1217 20:36:07.887398  528764 api_server.go:52] waiting for apiserver process to appear ...
	I1217 20:36:07.887476  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:08.388398  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:08.888310  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:09.388248  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:09.887698  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:10.387671  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:10.887697  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:11.387734  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:11.888366  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:12.388180  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:12.888379  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:13.387943  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:13.887667  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:14.388477  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:14.888341  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:15.388247  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:15.888425  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:16.388580  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:16.888356  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:17.387968  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:17.888549  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:18.388370  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:18.887715  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:19.387565  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:19.887775  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:20.388470  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:20.888348  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:21.388333  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:21.888012  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:22.387716  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:22.887746  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:23.388395  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:23.887695  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:24.387756  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:24.887696  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:25.388493  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:25.888451  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:26.387822  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:26.888379  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:27.388361  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:27.888017  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:28.388584  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:28.887763  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:29.388547  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:29.887757  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:30.387781  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:30.888609  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:31.387635  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:31.888171  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:32.388412  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:32.888528  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:33.387792  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:33.888580  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:34.388192  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:34.888392  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:35.388250  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:35.888600  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:36.388467  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:36.887895  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:37.387730  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:37.888542  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:38.388614  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:38.888493  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:39.387705  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:39.887637  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:40.388516  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:40.887751  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:41.387675  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:41.888681  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:42.387731  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:42.887637  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:43.388408  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:43.888201  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:44.387929  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:44.888382  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:45.387742  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:45.887563  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:46.388569  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:46.888449  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:47.388453  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:47.888066  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:48.387738  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:48.888486  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:49.388004  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:49.887783  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:50.388587  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:50.887797  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:51.388583  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:51.888281  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:52.387751  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:52.888303  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:53.388442  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:53.887964  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:54.387766  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:54.887669  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:55.388318  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:55.888676  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:56.387669  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:56.888505  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:57.387758  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:57.888403  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:58.388534  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:58.887712  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:59.388454  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:59.888308  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:00.387737  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:00.887766  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:01.387557  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:01.888179  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:02.387975  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:02.887807  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:03.387768  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:03.887658  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:04.387571  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:04.887653  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:05.388569  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:05.887566  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:06.387577  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:06.887577  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:07.388433  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
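(For reference: the block above is minikube polling `pgrep -xnf kube-apiserver.*minikube.*` roughly every 500 ms until the apiserver process appears — here it never does, so the run falls through to log gathering below. A minimal sketch of such a poll; the helper name and timeout are illustrative.)

	// waitapi.go — poll pgrep for the apiserver process with a deadline (sketch).
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func waitForAPIServer(timeout time.Duration) (int, error) {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
			if err == nil { // pgrep exits 0 only when a process matched
				var pid int
				fmt.Sscanf(string(out), "%d", &pid)
				return pid, nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return 0, fmt.Errorf("kube-apiserver did not appear within %s", timeout)
	}

	func main() {
		pid, err := waitForAPIServer(time.Minute)
		fmt.Println(pid, err)
	}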
	I1217 20:37:07.887764  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:37:07.887843  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:37:07.914157  528764 cri.go:89] found id: ""
	I1217 20:37:07.914172  528764 logs.go:282] 0 containers: []
	W1217 20:37:07.914179  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:37:07.914184  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:37:07.914241  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:37:07.939801  528764 cri.go:89] found id: ""
	I1217 20:37:07.939815  528764 logs.go:282] 0 containers: []
	W1217 20:37:07.939823  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:37:07.939828  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:37:07.939892  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:37:07.966197  528764 cri.go:89] found id: ""
	I1217 20:37:07.966213  528764 logs.go:282] 0 containers: []
	W1217 20:37:07.966221  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:37:07.966226  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:37:07.966284  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:37:07.997124  528764 cri.go:89] found id: ""
	I1217 20:37:07.997138  528764 logs.go:282] 0 containers: []
	W1217 20:37:07.997145  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:37:07.997150  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:37:07.997211  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:37:08.028280  528764 cri.go:89] found id: ""
	I1217 20:37:08.028295  528764 logs.go:282] 0 containers: []
	W1217 20:37:08.028302  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:37:08.028308  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:37:08.028368  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:37:08.058094  528764 cri.go:89] found id: ""
	I1217 20:37:08.058109  528764 logs.go:282] 0 containers: []
	W1217 20:37:08.058116  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:37:08.058121  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:37:08.058185  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:37:08.085720  528764 cri.go:89] found id: ""
	I1217 20:37:08.085736  528764 logs.go:282] 0 containers: []
	W1217 20:37:08.085744  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:37:08.085752  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:37:08.085763  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:37:08.150624  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:37:08.142553   11126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:08.142992   11126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:08.144649   11126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:08.145220   11126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:08.146782   11126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:37:08.142553   11126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:08.142992   11126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:08.144649   11126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:08.145220   11126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:08.146782   11126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:37:08.150636  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:37:08.150647  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:37:08.217929  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:37:08.217949  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:37:08.250550  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:37:08.250567  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:37:08.318542  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:37:08.318562  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
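(For reference: each diagnostics pass above probes the control-plane components one by one with `sudo crictl ps -a --quiet --name=<component>`; every probe in this run returns an empty ID list. A sketch of that sweep, with the component list taken from the log.)

	// sweep.go — count containers per control-plane component via crictl (sketch).
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		for _, name := range []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet",
		} {
			// --quiet prints one container ID per line; empty output means none.
			out, _ := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			ids := strings.Fields(string(out))
			fmt.Printf("%s: %d container(s)\n", name, len(ids))
		}
	}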
	I1217 20:37:10.835004  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:10.846829  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:37:10.846892  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:37:10.877739  528764 cri.go:89] found id: ""
	I1217 20:37:10.877756  528764 logs.go:282] 0 containers: []
	W1217 20:37:10.877762  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:37:10.877768  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:37:10.877829  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:37:10.903713  528764 cri.go:89] found id: ""
	I1217 20:37:10.903727  528764 logs.go:282] 0 containers: []
	W1217 20:37:10.903735  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:37:10.903740  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:37:10.903802  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:37:10.931733  528764 cri.go:89] found id: ""
	I1217 20:37:10.931747  528764 logs.go:282] 0 containers: []
	W1217 20:37:10.931754  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:37:10.931759  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:37:10.931818  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:37:10.957707  528764 cri.go:89] found id: ""
	I1217 20:37:10.957722  528764 logs.go:282] 0 containers: []
	W1217 20:37:10.957729  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:37:10.957735  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:37:10.957793  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:37:10.986438  528764 cri.go:89] found id: ""
	I1217 20:37:10.986452  528764 logs.go:282] 0 containers: []
	W1217 20:37:10.986459  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:37:10.986464  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:37:10.986530  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:37:11.014361  528764 cri.go:89] found id: ""
	I1217 20:37:11.014385  528764 logs.go:282] 0 containers: []
	W1217 20:37:11.014393  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:37:11.014402  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:37:11.014462  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:37:11.041366  528764 cri.go:89] found id: ""
	I1217 20:37:11.041381  528764 logs.go:282] 0 containers: []
	W1217 20:37:11.041388  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:37:11.041401  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:37:11.041411  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:37:11.056502  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:37:11.056519  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:37:11.122467  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:37:11.114176   11237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:11.114734   11237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:11.116369   11237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:11.116862   11237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:11.118392   11237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:37:11.114176   11237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:11.114734   11237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:11.116369   11237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:11.116862   11237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:11.118392   11237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:37:11.122477  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:37:11.122486  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:37:11.190244  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:37:11.190265  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:37:11.220700  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:37:11.220717  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:37:13.792757  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:13.802840  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:37:13.802899  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:37:13.836386  528764 cri.go:89] found id: ""
	I1217 20:37:13.836401  528764 logs.go:282] 0 containers: []
	W1217 20:37:13.836408  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:37:13.836415  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:37:13.836471  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:37:13.870570  528764 cri.go:89] found id: ""
	I1217 20:37:13.870585  528764 logs.go:282] 0 containers: []
	W1217 20:37:13.870592  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:37:13.870597  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:37:13.870656  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:37:13.898823  528764 cri.go:89] found id: ""
	I1217 20:37:13.898837  528764 logs.go:282] 0 containers: []
	W1217 20:37:13.898845  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:37:13.898850  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:37:13.898908  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:37:13.926200  528764 cri.go:89] found id: ""
	I1217 20:37:13.926214  528764 logs.go:282] 0 containers: []
	W1217 20:37:13.926221  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:37:13.926226  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:37:13.926284  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:37:13.952625  528764 cri.go:89] found id: ""
	I1217 20:37:13.952639  528764 logs.go:282] 0 containers: []
	W1217 20:37:13.952647  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:37:13.952652  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:37:13.952711  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:37:13.978517  528764 cri.go:89] found id: ""
	I1217 20:37:13.978531  528764 logs.go:282] 0 containers: []
	W1217 20:37:13.978539  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:37:13.978544  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:37:13.978602  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:37:14.010201  528764 cri.go:89] found id: ""
	I1217 20:37:14.010215  528764 logs.go:282] 0 containers: []
	W1217 20:37:14.010223  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:37:14.010231  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:37:14.010242  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:37:14.075917  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:37:14.075936  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:37:14.091123  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:37:14.091142  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:37:14.155624  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:37:14.146305   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:14.147214   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:14.149124   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:14.149628   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:14.151335   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:37:14.146305   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:14.147214   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:14.149124   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:14.149628   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:14.151335   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:37:14.155636  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:37:14.155647  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:37:14.224215  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:37:14.224237  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:37:16.756286  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:16.766692  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:37:16.766752  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:37:16.795671  528764 cri.go:89] found id: ""
	I1217 20:37:16.795692  528764 logs.go:282] 0 containers: []
	W1217 20:37:16.795700  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:37:16.795705  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:37:16.795762  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:37:16.829850  528764 cri.go:89] found id: ""
	I1217 20:37:16.829863  528764 logs.go:282] 0 containers: []
	W1217 20:37:16.829870  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:37:16.829875  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:37:16.829932  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:37:16.860495  528764 cri.go:89] found id: ""
	I1217 20:37:16.860509  528764 logs.go:282] 0 containers: []
	W1217 20:37:16.860516  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:37:16.860521  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:37:16.860580  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:37:16.888120  528764 cri.go:89] found id: ""
	I1217 20:37:16.888133  528764 logs.go:282] 0 containers: []
	W1217 20:37:16.888141  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:37:16.888146  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:37:16.888201  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:37:16.918449  528764 cri.go:89] found id: ""
	I1217 20:37:16.918463  528764 logs.go:282] 0 containers: []
	W1217 20:37:16.918469  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:37:16.918484  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:37:16.918542  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:37:16.948626  528764 cri.go:89] found id: ""
	I1217 20:37:16.948652  528764 logs.go:282] 0 containers: []
	W1217 20:37:16.948659  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:37:16.948665  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:37:16.948729  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:37:16.977608  528764 cri.go:89] found id: ""
	I1217 20:37:16.977622  528764 logs.go:282] 0 containers: []
	W1217 20:37:16.977630  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:37:16.977637  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:37:16.977647  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:37:17.042493  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:37:17.042513  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:37:17.057131  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:37:17.057148  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:37:17.125378  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:37:17.116772   11447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:17.117492   11447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:17.119295   11447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:17.119699   11447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:17.121218   11447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:37:17.116772   11447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:17.117492   11447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:17.119295   11447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:17.119699   11447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:17.121218   11447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:37:17.125389  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:37:17.125400  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:37:17.192802  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:37:17.192822  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:37:19.720869  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:19.730761  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:37:19.730822  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:37:19.757595  528764 cri.go:89] found id: ""
	I1217 20:37:19.757609  528764 logs.go:282] 0 containers: []
	W1217 20:37:19.757617  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:37:19.757622  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:37:19.757679  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:37:19.783074  528764 cri.go:89] found id: ""
	I1217 20:37:19.783087  528764 logs.go:282] 0 containers: []
	W1217 20:37:19.783102  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:37:19.783108  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:37:19.783165  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:37:19.810405  528764 cri.go:89] found id: ""
	I1217 20:37:19.810419  528764 logs.go:282] 0 containers: []
	W1217 20:37:19.810426  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:37:19.810432  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:37:19.810493  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:37:19.837744  528764 cri.go:89] found id: ""
	I1217 20:37:19.837758  528764 logs.go:282] 0 containers: []
	W1217 20:37:19.837766  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:37:19.837771  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:37:19.837828  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:37:19.873857  528764 cri.go:89] found id: ""
	I1217 20:37:19.873872  528764 logs.go:282] 0 containers: []
	W1217 20:37:19.873879  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:37:19.873884  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:37:19.873952  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:37:19.902376  528764 cri.go:89] found id: ""
	I1217 20:37:19.902390  528764 logs.go:282] 0 containers: []
	W1217 20:37:19.902397  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:37:19.902402  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:37:19.902477  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:37:19.928530  528764 cri.go:89] found id: ""
	I1217 20:37:19.928544  528764 logs.go:282] 0 containers: []
	W1217 20:37:19.928552  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:37:19.928559  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:37:19.928570  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:37:19.993175  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:37:19.984569   11545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:19.985337   11545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:19.986955   11545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:19.987601   11545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:19.989181   11545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:37:19.984569   11545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:19.985337   11545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:19.986955   11545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:19.987601   11545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:19.989181   11545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:37:19.993185  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:37:19.993196  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:37:20.066305  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:37:20.066326  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:37:20.099789  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:37:20.099806  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:37:20.165283  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:37:20.165304  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:37:22.681290  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:22.691134  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:37:22.691202  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:37:22.723831  528764 cri.go:89] found id: ""
	I1217 20:37:22.723845  528764 logs.go:282] 0 containers: []
	W1217 20:37:22.723862  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:37:22.723868  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:37:22.723933  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:37:22.749315  528764 cri.go:89] found id: ""
	I1217 20:37:22.749329  528764 logs.go:282] 0 containers: []
	W1217 20:37:22.749336  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:37:22.749341  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:37:22.749396  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:37:22.773712  528764 cri.go:89] found id: ""
	I1217 20:37:22.773738  528764 logs.go:282] 0 containers: []
	W1217 20:37:22.773746  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:37:22.773751  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:37:22.773825  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:37:22.799128  528764 cri.go:89] found id: ""
	I1217 20:37:22.799147  528764 logs.go:282] 0 containers: []
	W1217 20:37:22.799154  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:37:22.799159  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:37:22.799214  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:37:22.830333  528764 cri.go:89] found id: ""
	I1217 20:37:22.830347  528764 logs.go:282] 0 containers: []
	W1217 20:37:22.830354  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:37:22.830359  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:37:22.830414  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:37:22.857658  528764 cri.go:89] found id: ""
	I1217 20:37:22.857671  528764 logs.go:282] 0 containers: []
	W1217 20:37:22.857678  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:37:22.857683  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:37:22.857740  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:37:22.892187  528764 cri.go:89] found id: ""
	I1217 20:37:22.892202  528764 logs.go:282] 0 containers: []
	W1217 20:37:22.892209  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:37:22.892217  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:37:22.892226  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:37:22.963552  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:37:22.963572  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:37:22.992259  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:37:22.992274  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:37:23.058615  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:37:23.058636  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:37:23.073409  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:37:23.073442  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:37:23.138641  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:37:23.129846   11670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:23.130677   11670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:23.132360   11670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:23.132912   11670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:23.134580   11670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:37:23.129846   11670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:23.130677   11670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:23.132360   11670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:23.132912   11670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:23.134580   11670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:37:25.638919  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:25.648946  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:37:25.649032  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:37:25.678111  528764 cri.go:89] found id: ""
	I1217 20:37:25.678127  528764 logs.go:282] 0 containers: []
	W1217 20:37:25.678134  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:37:25.678140  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:37:25.678230  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:37:25.704834  528764 cri.go:89] found id: ""
	I1217 20:37:25.704848  528764 logs.go:282] 0 containers: []
	W1217 20:37:25.704855  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:37:25.704861  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:37:25.704943  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:37:25.731274  528764 cri.go:89] found id: ""
	I1217 20:37:25.731287  528764 logs.go:282] 0 containers: []
	W1217 20:37:25.731295  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:37:25.731300  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:37:25.731354  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:37:25.756601  528764 cri.go:89] found id: ""
	I1217 20:37:25.756615  528764 logs.go:282] 0 containers: []
	W1217 20:37:25.756622  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:37:25.756628  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:37:25.756689  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:37:25.781743  528764 cri.go:89] found id: ""
	I1217 20:37:25.781757  528764 logs.go:282] 0 containers: []
	W1217 20:37:25.781764  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:37:25.781787  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:37:25.781846  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:37:25.810686  528764 cri.go:89] found id: ""
	I1217 20:37:25.810699  528764 logs.go:282] 0 containers: []
	W1217 20:37:25.810718  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:37:25.810724  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:37:25.810791  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:37:25.861184  528764 cri.go:89] found id: ""
	I1217 20:37:25.861200  528764 logs.go:282] 0 containers: []
	W1217 20:37:25.861207  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:37:25.861215  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:37:25.861237  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:37:25.937980  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:37:25.938000  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:37:25.953961  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:37:25.953980  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:37:26.020362  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:37:26.011417   11762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:26.012115   11762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:26.013886   11762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:26.014423   11762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:26.016131   11762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:37:26.011417   11762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:26.012115   11762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:26.013886   11762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:26.014423   11762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:26.016131   11762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
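Each retry cycle above gathers the same four log sources. To inspect them interactively rather than through minikube's collector, the equivalent commands (copied verbatim from the Run: lines in this log, to be executed on the node) are:

	# kubelet and CRI-O service logs, last 400 lines each:
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	# kernel warnings and errors:
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	# node state, using the kubeconfig minikube provisioned:
	sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig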
	I1217 20:37:26.020376  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:37:26.020387  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:37:26.092647  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:37:26.092669  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:37:28.622440  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:28.632675  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:37:28.632735  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:37:28.657198  528764 cri.go:89] found id: ""
	I1217 20:37:28.657213  528764 logs.go:282] 0 containers: []
	W1217 20:37:28.657220  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:37:28.657226  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:37:28.657284  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:37:28.683432  528764 cri.go:89] found id: ""
	I1217 20:37:28.683446  528764 logs.go:282] 0 containers: []
	W1217 20:37:28.683453  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:37:28.683458  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:37:28.683513  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:37:28.708948  528764 cri.go:89] found id: ""
	I1217 20:37:28.708962  528764 logs.go:282] 0 containers: []
	W1217 20:37:28.708969  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:37:28.708975  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:37:28.709030  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:37:28.738615  528764 cri.go:89] found id: ""
	I1217 20:37:28.738629  528764 logs.go:282] 0 containers: []
	W1217 20:37:28.738637  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:37:28.738642  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:37:28.738697  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:37:28.764458  528764 cri.go:89] found id: ""
	I1217 20:37:28.764472  528764 logs.go:282] 0 containers: []
	W1217 20:37:28.764479  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:37:28.764484  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:37:28.764544  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:37:28.789220  528764 cri.go:89] found id: ""
	I1217 20:37:28.789234  528764 logs.go:282] 0 containers: []
	W1217 20:37:28.789242  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:37:28.789247  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:37:28.789302  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:37:28.813820  528764 cri.go:89] found id: ""
	I1217 20:37:28.813835  528764 logs.go:282] 0 containers: []
	W1217 20:37:28.813841  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:37:28.813848  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:37:28.813869  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:37:28.896349  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:37:28.887880   11857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:28.888593   11857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:28.890336   11857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:28.890868   11857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:28.892434   11857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:37:28.887880   11857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:28.888593   11857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:28.890336   11857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:28.890868   11857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:28.892434   11857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:37:28.896359  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:37:28.896369  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:37:28.964976  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:37:28.964996  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:37:28.995089  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:37:28.995105  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:37:29.073565  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:37:29.073593  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:37:31.589038  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:31.599070  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:37:31.599131  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:37:31.624604  528764 cri.go:89] found id: ""
	I1217 20:37:31.624619  528764 logs.go:282] 0 containers: []
	W1217 20:37:31.624626  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:37:31.624631  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:37:31.624688  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:37:31.650593  528764 cri.go:89] found id: ""
	I1217 20:37:31.650608  528764 logs.go:282] 0 containers: []
	W1217 20:37:31.650616  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:37:31.650621  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:37:31.650684  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:37:31.679069  528764 cri.go:89] found id: ""
	I1217 20:37:31.679084  528764 logs.go:282] 0 containers: []
	W1217 20:37:31.679091  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:37:31.679096  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:37:31.679153  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:37:31.709079  528764 cri.go:89] found id: ""
	I1217 20:37:31.709093  528764 logs.go:282] 0 containers: []
	W1217 20:37:31.709100  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:37:31.709105  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:37:31.709162  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:37:31.740223  528764 cri.go:89] found id: ""
	I1217 20:37:31.740237  528764 logs.go:282] 0 containers: []
	W1217 20:37:31.740244  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:37:31.740252  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:37:31.740307  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:37:31.771855  528764 cri.go:89] found id: ""
	I1217 20:37:31.771869  528764 logs.go:282] 0 containers: []
	W1217 20:37:31.771877  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:37:31.771883  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:37:31.771942  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:37:31.798992  528764 cri.go:89] found id: ""
	I1217 20:37:31.799006  528764 logs.go:282] 0 containers: []
	W1217 20:37:31.799013  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:37:31.799021  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:37:31.799031  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:37:31.876265  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:37:31.876285  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:37:31.912678  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:37:31.912694  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:37:31.979473  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:37:31.979494  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:37:31.994138  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:37:31.994154  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:37:32.058919  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:37:32.050836   11991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:32.051662   11991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:32.053342   11991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:32.053690   11991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:32.055195   11991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:37:32.050836   11991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:32.051662   11991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:32.053342   11991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:32.053690   11991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:32.055195   11991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:37:34.560573  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:34.570410  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:37:34.570477  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:37:34.595394  528764 cri.go:89] found id: ""
	I1217 20:37:34.595407  528764 logs.go:282] 0 containers: []
	W1217 20:37:34.595415  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:37:34.595420  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:37:34.595474  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:37:34.620347  528764 cri.go:89] found id: ""
	I1217 20:37:34.620362  528764 logs.go:282] 0 containers: []
	W1217 20:37:34.620376  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:37:34.620382  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:37:34.620444  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:37:34.646173  528764 cri.go:89] found id: ""
	I1217 20:37:34.646188  528764 logs.go:282] 0 containers: []
	W1217 20:37:34.646195  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:37:34.646200  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:37:34.646259  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:37:34.675076  528764 cri.go:89] found id: ""
	I1217 20:37:34.675090  528764 logs.go:282] 0 containers: []
	W1217 20:37:34.675098  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:37:34.675103  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:37:34.675160  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:37:34.700382  528764 cri.go:89] found id: ""
	I1217 20:37:34.700396  528764 logs.go:282] 0 containers: []
	W1217 20:37:34.700403  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:37:34.700414  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:37:34.700479  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:37:34.727372  528764 cri.go:89] found id: ""
	I1217 20:37:34.727387  528764 logs.go:282] 0 containers: []
	W1217 20:37:34.727394  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:37:34.727400  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:37:34.727456  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:37:34.753290  528764 cri.go:89] found id: ""
	I1217 20:37:34.753305  528764 logs.go:282] 0 containers: []
	W1217 20:37:34.753312  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:37:34.753319  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:37:34.753331  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:37:34.782001  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:37:34.782019  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:37:34.847492  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:37:34.847511  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:37:34.863498  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:37:34.863515  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:37:34.939936  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:37:34.931817   12094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:34.932582   12094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:34.934298   12094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:34.934763   12094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:34.936241   12094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:37:34.931817   12094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:34.932582   12094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:34.934298   12094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:34.934763   12094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:34.936241   12094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:37:34.939947  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:37:34.939958  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:37:37.511892  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:37.522041  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:37:37.522101  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:37:37.546092  528764 cri.go:89] found id: ""
	I1217 20:37:37.546106  528764 logs.go:282] 0 containers: []
	W1217 20:37:37.546113  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:37:37.546119  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:37:37.546179  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:37:37.571827  528764 cri.go:89] found id: ""
	I1217 20:37:37.571841  528764 logs.go:282] 0 containers: []
	W1217 20:37:37.571848  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:37:37.571853  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:37:37.571912  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:37:37.597752  528764 cri.go:89] found id: ""
	I1217 20:37:37.597766  528764 logs.go:282] 0 containers: []
	W1217 20:37:37.597774  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:37:37.597779  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:37:37.597840  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:37:37.624088  528764 cri.go:89] found id: ""
	I1217 20:37:37.624102  528764 logs.go:282] 0 containers: []
	W1217 20:37:37.624109  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:37:37.624114  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:37:37.624170  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:37:37.651097  528764 cri.go:89] found id: ""
	I1217 20:37:37.651112  528764 logs.go:282] 0 containers: []
	W1217 20:37:37.651119  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:37:37.651125  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:37:37.651188  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:37:37.678706  528764 cri.go:89] found id: ""
	I1217 20:37:37.678720  528764 logs.go:282] 0 containers: []
	W1217 20:37:37.678728  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:37:37.678743  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:37:37.678804  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:37:37.705805  528764 cri.go:89] found id: ""
	I1217 20:37:37.705817  528764 logs.go:282] 0 containers: []
	W1217 20:37:37.705825  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:37:37.705833  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:37:37.705844  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:37:37.721021  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:37:37.721041  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:37:37.788297  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:37:37.779817   12180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:37.780331   12180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:37.782117   12180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:37.782612   12180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:37.784219   12180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:37:37.779817   12180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:37.780331   12180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:37.782117   12180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:37.782612   12180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:37.784219   12180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:37:37.788308  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:37:37.788318  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:37:37.865227  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:37:37.865247  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:37:37.897290  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:37:37.897308  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:37:40.462446  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:40.472823  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:37:40.472885  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:37:40.502899  528764 cri.go:89] found id: ""
	I1217 20:37:40.502914  528764 logs.go:282] 0 containers: []
	W1217 20:37:40.502926  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:37:40.502931  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:37:40.502988  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:37:40.528131  528764 cri.go:89] found id: ""
	I1217 20:37:40.528144  528764 logs.go:282] 0 containers: []
	W1217 20:37:40.528151  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:37:40.528156  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:37:40.528214  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:37:40.552632  528764 cri.go:89] found id: ""
	I1217 20:37:40.552646  528764 logs.go:282] 0 containers: []
	W1217 20:37:40.552653  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:37:40.552659  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:37:40.552715  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:37:40.578013  528764 cri.go:89] found id: ""
	I1217 20:37:40.578028  528764 logs.go:282] 0 containers: []
	W1217 20:37:40.578035  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:37:40.578042  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:37:40.578100  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:37:40.604172  528764 cri.go:89] found id: ""
	I1217 20:37:40.604186  528764 logs.go:282] 0 containers: []
	W1217 20:37:40.604193  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:37:40.604198  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:37:40.604253  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:37:40.629837  528764 cri.go:89] found id: ""
	I1217 20:37:40.629851  528764 logs.go:282] 0 containers: []
	W1217 20:37:40.629867  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:37:40.629872  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:37:40.629931  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:37:40.656555  528764 cri.go:89] found id: ""
	I1217 20:37:40.656568  528764 logs.go:282] 0 containers: []
	W1217 20:37:40.656576  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:37:40.656583  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:37:40.656593  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:37:40.670930  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:37:40.670946  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:37:40.736814  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:37:40.728916   12288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:40.729413   12288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:40.731058   12288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:40.731457   12288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:40.733254   12288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:37:40.728916   12288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:40.729413   12288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:40.731058   12288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:40.731457   12288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:40.733254   12288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
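Every describe-nodes attempt fails the same way: kubectl cannot even open a TCP connection to localhost:8441, the apiserver address this run uses, so the failure is at the socket level, before TLS or auth are involved. A bare dial reproduces that distinction by hand; a minimal sketch using only Go's standard library (port and timeout are read off the log, the probe itself is illustrative):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// A raw TCP dial separates "nothing is listening" (connection refused,
	// as in the log) from "listening but unhealthy" (connect succeeds and
	// the failure would move up to the HTTPS layer instead).
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver port closed:", err) // matches: connect: connection refused
		return
	}
	conn.Close()
	fmt.Println("something is listening on :8441")
}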
	I1217 20:37:40.736824  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:37:40.736835  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:37:40.803782  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:37:40.803800  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:37:40.851556  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:37:40.851572  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:37:43.430627  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:43.440939  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:37:43.441000  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:37:43.470749  528764 cri.go:89] found id: ""
	I1217 20:37:43.470764  528764 logs.go:282] 0 containers: []
	W1217 20:37:43.470771  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:37:43.470777  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:37:43.470833  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:37:43.495753  528764 cri.go:89] found id: ""
	I1217 20:37:43.495766  528764 logs.go:282] 0 containers: []
	W1217 20:37:43.495774  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:37:43.495779  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:37:43.495836  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:37:43.521880  528764 cri.go:89] found id: ""
	I1217 20:37:43.521896  528764 logs.go:282] 0 containers: []
	W1217 20:37:43.521903  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:37:43.521908  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:37:43.521971  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:37:43.547990  528764 cri.go:89] found id: ""
	I1217 20:37:43.548004  528764 logs.go:282] 0 containers: []
	W1217 20:37:43.548012  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:37:43.548018  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:37:43.548080  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:37:43.576401  528764 cri.go:89] found id: ""
	I1217 20:37:43.576415  528764 logs.go:282] 0 containers: []
	W1217 20:37:43.576422  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:37:43.576427  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:37:43.576485  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:37:43.604828  528764 cri.go:89] found id: ""
	I1217 20:37:43.604840  528764 logs.go:282] 0 containers: []
	W1217 20:37:43.604848  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:37:43.604853  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:37:43.604909  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:37:43.636907  528764 cri.go:89] found id: ""
	I1217 20:37:43.636920  528764 logs.go:282] 0 containers: []
	W1217 20:37:43.636927  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:37:43.636935  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:37:43.636945  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:37:43.701148  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:37:43.701165  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:37:43.715342  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:37:43.715357  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:37:43.787937  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:37:43.780156   12396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:43.780601   12396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:43.782216   12396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:43.782718   12396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:43.784167   12396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:37:43.780156   12396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:43.780601   12396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:43.782216   12396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:43.782718   12396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:43.784167   12396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:37:43.787957  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:37:43.787968  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:37:43.858959  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:37:43.858978  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
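Note the cycle timestamps — 20:37:40, 20:37:43, 20:37:46, ... — the health checker re-runs the whole probe roughly every three seconds until the apiserver appears or the test's own timeout expires. The control flow is an ordinary poll-until-deadline loop; a hedged sketch (the interval is read off the log, the deadline and predicate are stand-ins):

package main

import (
	"errors"
	"fmt"
	"net"
	"time"
)

// pollUntil re-runs check every interval until it succeeds or the deadline
// passes, mirroring the ~3s cadence between the pgrep/crictl cycles above.
func pollUntil(interval, deadline time.Duration, check func() error) error {
	stop := time.Now().Add(deadline)
	for {
		if err := check(); err == nil {
			return nil
		}
		if time.Now().After(stop) {
			return errors.New("timed out waiting for apiserver")
		}
		time.Sleep(interval)
	}
}

func main() {
	err := pollUntil(3*time.Second, 30*time.Second, func() error {
		c, err := net.DialTimeout("tcp", "localhost:8441", time.Second)
		if err != nil {
			return err
		}
		c.Close()
		return nil
	})
	fmt.Println("result:", err)
}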
	I1217 20:37:46.395799  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:46.406118  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:37:46.406190  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:37:46.433062  528764 cri.go:89] found id: ""
	I1217 20:37:46.433076  528764 logs.go:282] 0 containers: []
	W1217 20:37:46.433083  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:37:46.433089  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:37:46.433151  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:37:46.459553  528764 cri.go:89] found id: ""
	I1217 20:37:46.459568  528764 logs.go:282] 0 containers: []
	W1217 20:37:46.459575  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:37:46.459604  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:37:46.459668  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:37:46.484831  528764 cri.go:89] found id: ""
	I1217 20:37:46.484845  528764 logs.go:282] 0 containers: []
	W1217 20:37:46.484853  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:37:46.484858  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:37:46.484920  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:37:46.509669  528764 cri.go:89] found id: ""
	I1217 20:37:46.509683  528764 logs.go:282] 0 containers: []
	W1217 20:37:46.509690  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:37:46.509695  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:37:46.509752  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:37:46.534227  528764 cri.go:89] found id: ""
	I1217 20:37:46.534242  528764 logs.go:282] 0 containers: []
	W1217 20:37:46.534254  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:37:46.534260  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:37:46.534316  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:37:46.563383  528764 cri.go:89] found id: ""
	I1217 20:37:46.563397  528764 logs.go:282] 0 containers: []
	W1217 20:37:46.563405  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:37:46.563411  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:37:46.563476  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:37:46.589321  528764 cri.go:89] found id: ""
	I1217 20:37:46.589335  528764 logs.go:282] 0 containers: []
	W1217 20:37:46.589342  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:37:46.589350  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:37:46.589364  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:37:46.654894  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:37:46.654914  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:37:46.669806  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:37:46.669822  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:37:46.731726  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:37:46.723562   12499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:46.724128   12499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:46.725849   12499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:46.726343   12499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:46.727890   12499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:37:46.723562   12499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:46.724128   12499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:46.725849   12499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:46.726343   12499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:46.727890   12499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:37:46.731737  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:37:46.731763  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:37:46.799300  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:37:46.799320  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
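The container-status line just above packs two fallbacks into one command: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a first resolves crictl's full path (falling back to the bare name when which finds nothing), then falls back to docker ps -a if the crictl invocation fails — so the same gather step works across CRI-O, containerd, and Docker runtimes. The same resolve-then-fallback shape in Go, as an illustrative sketch:

package main

import (
	"fmt"
	"os/exec"
)

// containerStatus tries crictl first (resolving its path like `which crictl`),
// then falls back to docker, matching the shell fallback chain in the log.
func containerStatus() ([]byte, error) {
	tool := "crictl"
	if path, err := exec.LookPath("crictl"); err == nil {
		tool = path // `which crictl` succeeded
	}
	if out, err := exec.Command("sudo", tool, "ps", "-a").CombinedOutput(); err == nil {
		return out, nil
	}
	return exec.Command("sudo", "docker", "ps", "-a").CombinedOutput() // `|| sudo docker ps -a`
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("neither crictl nor docker available:", err)
		return
	}
	fmt.Print(string(out))
}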
	I1217 20:37:49.348034  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:49.358157  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:37:49.358218  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:37:49.382823  528764 cri.go:89] found id: ""
	I1217 20:37:49.382837  528764 logs.go:282] 0 containers: []
	W1217 20:37:49.382844  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:37:49.382849  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:37:49.382917  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:37:49.409079  528764 cri.go:89] found id: ""
	I1217 20:37:49.409094  528764 logs.go:282] 0 containers: []
	W1217 20:37:49.409101  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:37:49.409106  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:37:49.409162  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:37:49.434313  528764 cri.go:89] found id: ""
	I1217 20:37:49.434327  528764 logs.go:282] 0 containers: []
	W1217 20:37:49.434340  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:37:49.434354  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:37:49.434426  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:37:49.460512  528764 cri.go:89] found id: ""
	I1217 20:37:49.460527  528764 logs.go:282] 0 containers: []
	W1217 20:37:49.460535  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:37:49.460551  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:37:49.460609  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:37:49.486735  528764 cri.go:89] found id: ""
	I1217 20:37:49.486748  528764 logs.go:282] 0 containers: []
	W1217 20:37:49.486756  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:37:49.486762  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:37:49.486830  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:37:49.512071  528764 cri.go:89] found id: ""
	I1217 20:37:49.512085  528764 logs.go:282] 0 containers: []
	W1217 20:37:49.512092  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:37:49.512098  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:37:49.512155  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:37:49.541263  528764 cri.go:89] found id: ""
	I1217 20:37:49.541277  528764 logs.go:282] 0 containers: []
	W1217 20:37:49.541284  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:37:49.541293  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:37:49.541310  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:37:49.570361  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:37:49.570378  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:37:49.638598  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:37:49.638618  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:37:49.653362  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:37:49.653381  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:37:49.715767  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:37:49.708109   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:49.708580   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:49.710179   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:49.710574   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:49.712039   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:37:49.708109   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:49.708580   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:49.710179   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:49.710574   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:49.712039   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:37:49.715778  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:37:49.715788  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
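Each cycle opens with pgrep -xnf kube-apiserver.*minikube.*: -f matches against the full command line, -x requires the pattern to match that line exactly (hence the .* anchors), and -n keeps only the newest match. pgrep exits non-zero with no output when nothing matches, and that non-zero exit is what keeps this loop spinning. A tiny Go wrapper showing how that exit status is typically consumed (illustrative, not minikube's code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// apiserverPID runs the same pgrep as the log; pgrep exits 1 with no
// output when no process's full command line matches the pattern.
func apiserverPID() (string, bool) {
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		return "", false // no kube-apiserver process yet
	}
	return strings.TrimSpace(string(out)), true
}

func main() {
	if pid, ok := apiserverPID(); ok {
		fmt.Println("kube-apiserver pid:", pid)
	} else {
		fmt.Println("kube-apiserver not running")
	}
}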
	I1217 20:37:52.283800  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:52.293434  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:37:52.293494  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:37:52.318791  528764 cri.go:89] found id: ""
	I1217 20:37:52.318805  528764 logs.go:282] 0 containers: []
	W1217 20:37:52.318812  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:37:52.318818  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:37:52.318876  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:37:52.344510  528764 cri.go:89] found id: ""
	I1217 20:37:52.344525  528764 logs.go:282] 0 containers: []
	W1217 20:37:52.344543  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:37:52.344549  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:37:52.344607  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:37:52.369118  528764 cri.go:89] found id: ""
	I1217 20:37:52.369132  528764 logs.go:282] 0 containers: []
	W1217 20:37:52.369140  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:37:52.369145  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:37:52.369200  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:37:52.394333  528764 cri.go:89] found id: ""
	I1217 20:37:52.394346  528764 logs.go:282] 0 containers: []
	W1217 20:37:52.394377  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:37:52.394383  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:37:52.394448  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:37:52.419501  528764 cri.go:89] found id: ""
	I1217 20:37:52.419525  528764 logs.go:282] 0 containers: []
	W1217 20:37:52.419532  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:37:52.419537  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:37:52.419626  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:37:52.448909  528764 cri.go:89] found id: ""
	I1217 20:37:52.448923  528764 logs.go:282] 0 containers: []
	W1217 20:37:52.448930  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:37:52.448936  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:37:52.449018  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:37:52.478490  528764 cri.go:89] found id: ""
	I1217 20:37:52.478513  528764 logs.go:282] 0 containers: []
	W1217 20:37:52.478521  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:37:52.478529  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:37:52.478539  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:37:52.542920  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:37:52.542939  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:37:52.558035  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:37:52.558052  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:37:52.621690  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:37:52.613254   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:52.613953   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:52.615616   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:52.615996   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:52.617541   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:37:52.613254   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:52.613953   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:52.615616   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:52.615996   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:52.617541   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:37:52.621710  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:37:52.621721  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:37:52.689051  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:37:52.689070  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:37:55.225326  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:55.235484  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:37:55.235545  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:37:55.260455  528764 cri.go:89] found id: ""
	I1217 20:37:55.260469  528764 logs.go:282] 0 containers: []
	W1217 20:37:55.260477  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:37:55.260482  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:37:55.260542  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:37:55.285381  528764 cri.go:89] found id: ""
	I1217 20:37:55.285396  528764 logs.go:282] 0 containers: []
	W1217 20:37:55.285404  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:37:55.285409  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:37:55.285464  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:37:55.311167  528764 cri.go:89] found id: ""
	I1217 20:37:55.311181  528764 logs.go:282] 0 containers: []
	W1217 20:37:55.311188  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:37:55.311194  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:37:55.311266  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:37:55.336553  528764 cri.go:89] found id: ""
	I1217 20:37:55.336568  528764 logs.go:282] 0 containers: []
	W1217 20:37:55.336575  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:37:55.336580  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:37:55.336636  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:37:55.362555  528764 cri.go:89] found id: ""
	I1217 20:37:55.362569  528764 logs.go:282] 0 containers: []
	W1217 20:37:55.362576  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:37:55.362582  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:37:55.362636  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:37:55.392446  528764 cri.go:89] found id: ""
	I1217 20:37:55.392460  528764 logs.go:282] 0 containers: []
	W1217 20:37:55.392468  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:37:55.392473  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:37:55.392529  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:37:55.421227  528764 cri.go:89] found id: ""
	I1217 20:37:55.421242  528764 logs.go:282] 0 containers: []
	W1217 20:37:55.421250  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:37:55.421257  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:37:55.421267  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:37:55.452467  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:37:55.452485  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:37:55.520333  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:37:55.520354  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:37:55.535397  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:37:55.535423  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:37:55.600267  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:37:55.591521   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:55.592108   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:55.593636   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:55.594292   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:55.596071   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:37:55.591521   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:55.592108   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:55.593636   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:55.594292   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:55.596071   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:37:55.600278  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:37:55.600290  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:37:58.172840  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:58.183231  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:37:58.183290  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:37:58.207527  528764 cri.go:89] found id: ""
	I1217 20:37:58.207541  528764 logs.go:282] 0 containers: []
	W1217 20:37:58.207548  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:37:58.207553  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:37:58.207649  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:37:58.232533  528764 cri.go:89] found id: ""
	I1217 20:37:58.232547  528764 logs.go:282] 0 containers: []
	W1217 20:37:58.232555  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:37:58.232559  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:37:58.232613  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:37:58.257969  528764 cri.go:89] found id: ""
	I1217 20:37:58.257983  528764 logs.go:282] 0 containers: []
	W1217 20:37:58.257990  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:37:58.257996  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:37:58.258051  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:37:58.283047  528764 cri.go:89] found id: ""
	I1217 20:37:58.283060  528764 logs.go:282] 0 containers: []
	W1217 20:37:58.283067  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:37:58.283072  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:37:58.283126  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:37:58.308494  528764 cri.go:89] found id: ""
	I1217 20:37:58.308508  528764 logs.go:282] 0 containers: []
	W1217 20:37:58.308515  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:37:58.308521  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:37:58.308578  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:37:58.333008  528764 cri.go:89] found id: ""
	I1217 20:37:58.333022  528764 logs.go:282] 0 containers: []
	W1217 20:37:58.333029  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:37:58.333035  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:37:58.333087  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:37:58.363097  528764 cri.go:89] found id: ""
	I1217 20:37:58.363111  528764 logs.go:282] 0 containers: []
	W1217 20:37:58.363118  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:37:58.363126  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:37:58.363145  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:37:58.428415  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:37:58.419398   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:58.420110   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:58.421854   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:58.422467   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:58.424133   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:37:58.419398   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:58.420110   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:58.421854   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:58.422467   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:58.424133   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:37:58.428426  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:37:58.428437  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:37:58.497159  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:37:58.497179  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:37:58.528904  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:37:58.528921  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:37:58.594783  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:37:58.594803  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:38:01.111545  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:38:01.123462  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:38:01.123520  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:38:01.152472  528764 cri.go:89] found id: ""
	I1217 20:38:01.152487  528764 logs.go:282] 0 containers: []
	W1217 20:38:01.152494  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:38:01.152499  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:38:01.152561  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:38:01.178899  528764 cri.go:89] found id: ""
	I1217 20:38:01.178913  528764 logs.go:282] 0 containers: []
	W1217 20:38:01.178921  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:38:01.178926  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:38:01.178983  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:38:01.206687  528764 cri.go:89] found id: ""
	I1217 20:38:01.206701  528764 logs.go:282] 0 containers: []
	W1217 20:38:01.206709  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:38:01.206714  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:38:01.206771  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:38:01.232497  528764 cri.go:89] found id: ""
	I1217 20:38:01.232511  528764 logs.go:282] 0 containers: []
	W1217 20:38:01.232519  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:38:01.232524  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:38:01.232579  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:38:01.261011  528764 cri.go:89] found id: ""
	I1217 20:38:01.261025  528764 logs.go:282] 0 containers: []
	W1217 20:38:01.261032  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:38:01.261037  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:38:01.261098  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:38:01.286117  528764 cri.go:89] found id: ""
	I1217 20:38:01.286132  528764 logs.go:282] 0 containers: []
	W1217 20:38:01.286150  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:38:01.286156  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:38:01.286222  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:38:01.312040  528764 cri.go:89] found id: ""
	I1217 20:38:01.312055  528764 logs.go:282] 0 containers: []
	W1217 20:38:01.312062  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:38:01.312069  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:38:01.312080  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:38:01.382670  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:38:01.382692  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:38:01.414378  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:38:01.414394  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:38:01.482999  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:38:01.483019  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:38:01.497972  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:38:01.497987  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:38:01.566351  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:38:01.557988   13036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:01.558761   13036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:01.560515   13036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:01.561020   13036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:01.562479   13036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:38:01.557988   13036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:01.558761   13036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:01.560515   13036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:01.561020   13036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:01.562479   13036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
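The describe-nodes gather runs the version-matched kubectl from /var/lib/minikube/binaries with an explicit --kubeconfig, so it bypasses the host's kubeconfig entirely, and the logs.go:130 warning reproduces both the exit status and the captured stderr. Capturing all three (stdout, stderr, exit code) from a child process is a standard exec.ExitError pattern in Go; a sketch reusing the paths from the log purely for illustration:

package main

import (
	"bytes"
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Same invocation as the log's "describe nodes" gather step.
	cmd := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl",
		"describe", "nodes",
		"--kubeconfig=/var/lib/minikube/kubeconfig")

	var stdout, stderr bytes.Buffer
	cmd.Stdout = &stdout
	cmd.Stderr = &stderr

	err := cmd.Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// This is the "Process exited with status 1" case in the log.
		fmt.Printf("exit status %d\nstderr:\n%s", exitErr.ExitCode(), stderr.String())
		return
	}
	fmt.Print(stdout.String())
}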
	I1217 20:38:04.066612  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:38:04.079947  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:38:04.080010  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:38:04.114202  528764 cri.go:89] found id: ""
	I1217 20:38:04.114216  528764 logs.go:282] 0 containers: []
	W1217 20:38:04.114223  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:38:04.114228  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:38:04.114294  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:38:04.144225  528764 cri.go:89] found id: ""
	I1217 20:38:04.144238  528764 logs.go:282] 0 containers: []
	W1217 20:38:04.144246  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:38:04.144250  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:38:04.144306  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:38:04.174041  528764 cri.go:89] found id: ""
	I1217 20:38:04.174055  528764 logs.go:282] 0 containers: []
	W1217 20:38:04.174066  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:38:04.174072  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:38:04.174138  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:38:04.198282  528764 cri.go:89] found id: ""
	I1217 20:38:04.198296  528764 logs.go:282] 0 containers: []
	W1217 20:38:04.198304  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:38:04.198309  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:38:04.198381  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:38:04.223855  528764 cri.go:89] found id: ""
	I1217 20:38:04.223869  528764 logs.go:282] 0 containers: []
	W1217 20:38:04.223888  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:38:04.223897  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:38:04.223965  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:38:04.249576  528764 cri.go:89] found id: ""
	I1217 20:38:04.249592  528764 logs.go:282] 0 containers: []
	W1217 20:38:04.249599  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:38:04.249604  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:38:04.249667  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:38:04.278330  528764 cri.go:89] found id: ""
	I1217 20:38:04.278344  528764 logs.go:282] 0 containers: []
	W1217 20:38:04.278351  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:38:04.278359  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:38:04.278369  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:38:04.346075  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:38:04.346098  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:38:04.379272  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:38:04.379287  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:38:04.446775  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:38:04.446795  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:38:04.461788  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:38:04.461804  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:38:04.526831  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:38:04.519073   13145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:04.519534   13145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:04.520989   13145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:04.521420   13145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:04.522964   13145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:38:04.519073   13145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:04.519534   13145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:04.520989   13145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:04.521420   13145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:04.522964   13145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
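The block above is one full iteration of minikube's apiserver wait loop: a pgrep probe for a kube-apiserver process, a sweep of crictl listings that all come back empty, and a round of log gathering, repeated on a roughly three-second cadence (20:38:04, 20:38:07, 20:38:10, ...). A minimal Go sketch of that polling shape follows; the helper name and the two-minute budget are illustrative, not taken from minikube's source.

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // apiserverRunning mirrors the `sudo pgrep -xnf kube-apiserver.*minikube.*`
    // probe in the log. pgrep exits non-zero when no process matches.
    func apiserverRunning() bool {
        err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
        return err == nil
    }

    func main() {
        deadline := time.Now().Add(2 * time.Minute) // illustrative budget
        for time.Now().Before(deadline) {
            if apiserverRunning() {
                fmt.Println("apiserver process found")
                return
            }
            // In the log, each miss triggers the container listing and
            // log gathering before the next probe ~3s later.
            time.Sleep(3 * time.Second)
        }
        fmt.Println("timed out waiting for kube-apiserver")
    }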
	I1217 20:38:07.028018  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:38:07.038329  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:38:07.038394  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:38:07.070882  528764 cri.go:89] found id: ""
	I1217 20:38:07.070911  528764 logs.go:282] 0 containers: []
	W1217 20:38:07.070919  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:38:07.070925  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:38:07.070991  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:38:07.104836  528764 cri.go:89] found id: ""
	I1217 20:38:07.104850  528764 logs.go:282] 0 containers: []
	W1217 20:38:07.104857  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:38:07.104863  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:38:07.104932  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:38:07.141894  528764 cri.go:89] found id: ""
	I1217 20:38:07.141908  528764 logs.go:282] 0 containers: []
	W1217 20:38:07.141916  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:38:07.141921  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:38:07.141990  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:38:07.169039  528764 cri.go:89] found id: ""
	I1217 20:38:07.169053  528764 logs.go:282] 0 containers: []
	W1217 20:38:07.169061  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:38:07.169066  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:38:07.169123  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:38:07.194478  528764 cri.go:89] found id: ""
	I1217 20:38:07.194501  528764 logs.go:282] 0 containers: []
	W1217 20:38:07.194509  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:38:07.194514  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:38:07.194579  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:38:07.219609  528764 cri.go:89] found id: ""
	I1217 20:38:07.219624  528764 logs.go:282] 0 containers: []
	W1217 20:38:07.219632  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:38:07.219638  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:38:07.219705  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:38:07.243819  528764 cri.go:89] found id: ""
	I1217 20:38:07.243832  528764 logs.go:282] 0 containers: []
	W1217 20:38:07.243840  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:38:07.243847  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:38:07.243857  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:38:07.311464  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:38:07.311483  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:38:07.343698  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:38:07.343751  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:38:07.410312  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:38:07.410332  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:38:07.424918  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:38:07.424934  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:38:07.487872  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:38:07.479467   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:07.480073   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:07.481799   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:07.482374   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:07.484022   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:38:07.479467   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:07.480073   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:07.481799   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:07.482374   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:07.484022   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
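Each `listing CRI containers` / `found id: ""` pair above corresponds to one `sudo crictl ps -a --quiet --name=<component>` run: `--quiet` prints bare container IDs, one per line, so empty output is exactly what the log records as `0 containers: []`. A sketch of that listing step, with a stand-in helper rather than minikube's actual cri.go API:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainerIDs runs `crictl ps -a --quiet --name=<name>` and returns
    // the container IDs it prints, one per line.
    func listContainerIDs(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        var ids []string
        for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
            if line != "" {
                ids = append(ids, line)
            }
        }
        return ids, nil
    }

    func main() {
        for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
            ids, err := listContainerIDs(name)
            if err != nil {
                fmt.Printf("listing %s failed: %v\n", name, err)
                continue
            }
            fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
        }
    }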
	I1217 20:38:09.989569  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:38:10.015377  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:38:10.015448  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:38:10.044563  528764 cri.go:89] found id: ""
	I1217 20:38:10.044582  528764 logs.go:282] 0 containers: []
	W1217 20:38:10.044590  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:38:10.044596  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:38:10.044659  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:38:10.082544  528764 cri.go:89] found id: ""
	I1217 20:38:10.082572  528764 logs.go:282] 0 containers: []
	W1217 20:38:10.082579  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:38:10.082585  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:38:10.082655  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:38:10.111998  528764 cri.go:89] found id: ""
	I1217 20:38:10.112021  528764 logs.go:282] 0 containers: []
	W1217 20:38:10.112028  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:38:10.112034  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:38:10.112090  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:38:10.143847  528764 cri.go:89] found id: ""
	I1217 20:38:10.143875  528764 logs.go:282] 0 containers: []
	W1217 20:38:10.143883  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:38:10.143888  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:38:10.143959  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:38:10.169935  528764 cri.go:89] found id: ""
	I1217 20:38:10.169948  528764 logs.go:282] 0 containers: []
	W1217 20:38:10.169956  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:38:10.169961  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:38:10.170035  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:38:10.199354  528764 cri.go:89] found id: ""
	I1217 20:38:10.199367  528764 logs.go:282] 0 containers: []
	W1217 20:38:10.199389  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:38:10.199395  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:38:10.199469  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:38:10.224921  528764 cri.go:89] found id: ""
	I1217 20:38:10.224934  528764 logs.go:282] 0 containers: []
	W1217 20:38:10.224942  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:38:10.224950  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:38:10.224961  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:38:10.292927  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:38:10.292947  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:38:10.321993  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:38:10.322010  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:38:10.388855  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:38:10.388876  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:38:10.404211  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:38:10.404228  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:38:10.466886  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:38:10.458803   13359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:10.459494   13359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:10.461201   13359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:10.461646   13359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:10.463180   13359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:38:10.458803   13359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:10.459494   13359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:10.461201   13359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:10.461646   13359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:10.463180   13359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
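The `Gathering logs for ...` steps are plain shell one-liners executed through `/bin/bash -c`: journalctl scoped to the crio and kubelet units, a level-filtered dmesg, and a crictl-with-docker-fallback for container status. The sketch below runs the same command strings (copied verbatim from the log); the local runner is a stand-in for minikube's ssh_runner, which executes them on the node over SSH:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // gather runs one log-collection one-liner through bash, the way
    // ssh_runner invokes it, and reports how much output came back.
    func gather(label, cmd string) {
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        fmt.Printf("== %s (err=%v, %d bytes) ==\n", label, err, len(out))
    }

    func main() {
        // Command strings taken verbatim from the log above.
        gather("CRI-O", "sudo journalctl -u crio -n 400")
        gather("kubelet", "sudo journalctl -u kubelet -n 400")
        gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
        gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
    }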
	I1217 20:38:12.968194  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:38:12.978084  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:38:12.978143  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:38:13.006691  528764 cri.go:89] found id: ""
	I1217 20:38:13.006706  528764 logs.go:282] 0 containers: []
	W1217 20:38:13.006713  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:38:13.006719  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:38:13.006779  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:38:13.032773  528764 cri.go:89] found id: ""
	I1217 20:38:13.032787  528764 logs.go:282] 0 containers: []
	W1217 20:38:13.032795  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:38:13.032800  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:38:13.032854  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:38:13.059128  528764 cri.go:89] found id: ""
	I1217 20:38:13.059142  528764 logs.go:282] 0 containers: []
	W1217 20:38:13.059150  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:38:13.059155  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:38:13.059213  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:38:13.093983  528764 cri.go:89] found id: ""
	I1217 20:38:13.093997  528764 logs.go:282] 0 containers: []
	W1217 20:38:13.094005  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:38:13.094010  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:38:13.094066  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:38:13.136453  528764 cri.go:89] found id: ""
	I1217 20:38:13.136467  528764 logs.go:282] 0 containers: []
	W1217 20:38:13.136474  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:38:13.136481  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:38:13.136536  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:38:13.166382  528764 cri.go:89] found id: ""
	I1217 20:38:13.166396  528764 logs.go:282] 0 containers: []
	W1217 20:38:13.166403  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:38:13.166409  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:38:13.166476  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:38:13.194638  528764 cri.go:89] found id: ""
	I1217 20:38:13.194651  528764 logs.go:282] 0 containers: []
	W1217 20:38:13.194658  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:38:13.194666  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:38:13.194689  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:38:13.261344  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:38:13.261362  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:38:13.276057  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:38:13.276073  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:38:13.341759  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:38:13.333469   13450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:13.334136   13450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:13.335845   13450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:13.336372   13450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:13.337969   13450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:38:13.333469   13450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:13.334136   13450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:13.335845   13450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:13.336372   13450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:13.337969   13450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:38:13.341769  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:38:13.341780  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:38:13.412593  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:38:13.412613  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
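The `describe nodes` step shells out to the version-pinned kubectl that minikube places under /var/lib/minikube/binaries, pointed at the kubeconfig written onto the node; with nothing listening on the apiserver port, kubectl exits with status 1 and the step is recorded as a W-level warning instead of aborting the loop. A sketch of that invocation, with paths copied from the log and illustrative error handling:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Version-pinned kubectl plus the on-node kubeconfig, as in the log.
        cmd := exec.Command("/bin/bash", "-c",
            "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes "+
                "--kubeconfig=/var/lib/minikube/kubeconfig")
        out, err := cmd.CombinedOutput()
        if err != nil {
            // Matches the "failed describe nodes" entries: the failure is
            // logged (exit status 1, connection refused) and the caller
            // moves on to the next gathering step.
            fmt.Printf("failed describe nodes: %v\n%s", err, out)
            return
        }
        fmt.Printf("%s", out)
    }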
	I1217 20:38:15.945731  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:38:15.956026  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:38:15.956085  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:38:15.980875  528764 cri.go:89] found id: ""
	I1217 20:38:15.980889  528764 logs.go:282] 0 containers: []
	W1217 20:38:15.980897  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:38:15.980902  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:38:15.980956  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:38:16.017238  528764 cri.go:89] found id: ""
	I1217 20:38:16.017253  528764 logs.go:282] 0 containers: []
	W1217 20:38:16.017260  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:38:16.017265  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:38:16.017327  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:38:16.042662  528764 cri.go:89] found id: ""
	I1217 20:38:16.042676  528764 logs.go:282] 0 containers: []
	W1217 20:38:16.042684  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:38:16.042700  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:38:16.042759  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:38:16.070239  528764 cri.go:89] found id: ""
	I1217 20:38:16.070253  528764 logs.go:282] 0 containers: []
	W1217 20:38:16.070265  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:38:16.070281  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:38:16.070344  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:38:16.101763  528764 cri.go:89] found id: ""
	I1217 20:38:16.101777  528764 logs.go:282] 0 containers: []
	W1217 20:38:16.101785  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:38:16.101802  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:38:16.101863  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:38:16.132808  528764 cri.go:89] found id: ""
	I1217 20:38:16.132822  528764 logs.go:282] 0 containers: []
	W1217 20:38:16.132830  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:38:16.132835  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:38:16.132904  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:38:16.162901  528764 cri.go:89] found id: ""
	I1217 20:38:16.162925  528764 logs.go:282] 0 containers: []
	W1217 20:38:16.162932  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:38:16.162940  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:38:16.162951  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:38:16.177475  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:38:16.177491  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:38:16.239620  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:38:16.231437   13552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:16.232280   13552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:16.233889   13552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:16.234228   13552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:16.235746   13552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:38:16.231437   13552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:16.232280   13552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:16.233889   13552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:16.234228   13552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:16.235746   13552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:38:16.239630  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:38:16.239641  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:38:16.306695  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:38:16.306714  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:38:16.338739  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:38:16.338754  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
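Every kubectl failure above reduces to the same symptom: nothing is accepting TCP connections on localhost:8441, the port this profile's kubeconfig dials for the apiserver, so the client gets `connection refused` before API discovery can even start. A hypothetical stand-alone probe (not part of minikube) that confirms the same condition without kubectl:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // The address kubectl is dialing in the errors above.
        conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
        if err != nil {
            // With no kube-apiserver process, this prints the same
            // "connect: connection refused" the log shows.
            fmt.Println("apiserver not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("something is listening on :8441")
    }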
	I1217 20:38:18.906627  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:38:18.916877  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:38:18.916940  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:38:18.940995  528764 cri.go:89] found id: ""
	I1217 20:38:18.941009  528764 logs.go:282] 0 containers: []
	W1217 20:38:18.941016  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:38:18.941022  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:38:18.941090  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:38:18.967366  528764 cri.go:89] found id: ""
	I1217 20:38:18.967381  528764 logs.go:282] 0 containers: []
	W1217 20:38:18.967388  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:38:18.967393  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:38:18.967448  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:38:18.993265  528764 cri.go:89] found id: ""
	I1217 20:38:18.993279  528764 logs.go:282] 0 containers: []
	W1217 20:38:18.993286  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:38:18.993291  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:38:18.993345  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:38:19.020582  528764 cri.go:89] found id: ""
	I1217 20:38:19.020595  528764 logs.go:282] 0 containers: []
	W1217 20:38:19.020603  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:38:19.020608  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:38:19.020666  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:38:19.045982  528764 cri.go:89] found id: ""
	I1217 20:38:19.045996  528764 logs.go:282] 0 containers: []
	W1217 20:38:19.046005  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:38:19.046010  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:38:19.046069  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:38:19.073910  528764 cri.go:89] found id: ""
	I1217 20:38:19.073923  528764 logs.go:282] 0 containers: []
	W1217 20:38:19.073930  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:38:19.073936  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:38:19.073992  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:38:19.113478  528764 cri.go:89] found id: ""
	I1217 20:38:19.113491  528764 logs.go:282] 0 containers: []
	W1217 20:38:19.113499  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:38:19.113507  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:38:19.113517  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:38:19.181345  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:38:19.181364  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:38:19.196831  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:38:19.196848  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:38:19.262885  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:38:19.253623   13659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:19.254429   13659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:19.256066   13659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:19.256658   13659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:19.258445   13659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:38:19.253623   13659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:19.254429   13659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:19.256066   13659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:19.256658   13659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:19.258445   13659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:38:19.262896  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:38:19.262907  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:38:19.332927  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:38:19.332947  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:38:21.863218  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:38:21.873488  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:38:21.873552  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:38:21.901892  528764 cri.go:89] found id: ""
	I1217 20:38:21.901907  528764 logs.go:282] 0 containers: []
	W1217 20:38:21.901915  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:38:21.901930  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:38:21.901988  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:38:21.928067  528764 cri.go:89] found id: ""
	I1217 20:38:21.928080  528764 logs.go:282] 0 containers: []
	W1217 20:38:21.928087  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:38:21.928092  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:38:21.928149  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:38:21.953356  528764 cri.go:89] found id: ""
	I1217 20:38:21.953371  528764 logs.go:282] 0 containers: []
	W1217 20:38:21.953378  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:38:21.953383  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:38:21.953444  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:38:21.987415  528764 cri.go:89] found id: ""
	I1217 20:38:21.987428  528764 logs.go:282] 0 containers: []
	W1217 20:38:21.987436  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:38:21.987442  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:38:21.987509  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:38:22.016922  528764 cri.go:89] found id: ""
	I1217 20:38:22.016937  528764 logs.go:282] 0 containers: []
	W1217 20:38:22.016945  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:38:22.016951  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:38:22.017009  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:38:22.044463  528764 cri.go:89] found id: ""
	I1217 20:38:22.044477  528764 logs.go:282] 0 containers: []
	W1217 20:38:22.044484  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:38:22.044490  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:38:22.044545  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:38:22.072815  528764 cri.go:89] found id: ""
	I1217 20:38:22.072828  528764 logs.go:282] 0 containers: []
	W1217 20:38:22.072836  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:38:22.072844  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:38:22.072854  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:38:22.106754  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:38:22.106778  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:38:22.177000  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:38:22.177019  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:38:22.191928  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:38:22.191945  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:38:22.254841  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:38:22.246562   13773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:22.247341   13773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:22.249143   13773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:22.249615   13773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:22.251134   13773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:38:22.246562   13773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:22.247341   13773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:22.249143   13773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:22.249615   13773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:22.251134   13773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:38:22.254851  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:38:22.254862  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:38:24.826532  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:38:24.836772  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:38:24.836836  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:38:24.862693  528764 cri.go:89] found id: ""
	I1217 20:38:24.862706  528764 logs.go:282] 0 containers: []
	W1217 20:38:24.862714  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:38:24.862719  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:38:24.862789  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:38:24.887641  528764 cri.go:89] found id: ""
	I1217 20:38:24.887656  528764 logs.go:282] 0 containers: []
	W1217 20:38:24.887663  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:38:24.887668  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:38:24.887737  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:38:24.913131  528764 cri.go:89] found id: ""
	I1217 20:38:24.913145  528764 logs.go:282] 0 containers: []
	W1217 20:38:24.913168  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:38:24.913174  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:38:24.913242  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:38:24.939734  528764 cri.go:89] found id: ""
	I1217 20:38:24.939748  528764 logs.go:282] 0 containers: []
	W1217 20:38:24.939755  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:38:24.939760  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:38:24.939815  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:38:24.964904  528764 cri.go:89] found id: ""
	I1217 20:38:24.964919  528764 logs.go:282] 0 containers: []
	W1217 20:38:24.964925  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:38:24.964930  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:38:24.964988  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:38:24.990333  528764 cri.go:89] found id: ""
	I1217 20:38:24.990348  528764 logs.go:282] 0 containers: []
	W1217 20:38:24.990355  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:38:24.990361  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:38:24.990421  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:38:25.019872  528764 cri.go:89] found id: ""
	I1217 20:38:25.019887  528764 logs.go:282] 0 containers: []
	W1217 20:38:25.019895  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:38:25.019902  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:38:25.019914  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:38:25.036413  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:38:25.036438  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:38:25.112619  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:38:25.103911   13861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:25.104770   13861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:25.106472   13861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:25.107045   13861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:25.108652   13861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:38:25.103911   13861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:25.104770   13861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:25.106472   13861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:25.107045   13861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:25.108652   13861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:38:25.112632  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:38:25.112642  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:38:25.184378  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:38:25.184399  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:38:25.216673  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:38:25.216689  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:38:27.785567  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:38:27.796326  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:38:27.796391  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:38:27.825782  528764 cri.go:89] found id: ""
	I1217 20:38:27.825796  528764 logs.go:282] 0 containers: []
	W1217 20:38:27.825804  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:38:27.825809  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:38:27.825864  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:38:27.850601  528764 cri.go:89] found id: ""
	I1217 20:38:27.850614  528764 logs.go:282] 0 containers: []
	W1217 20:38:27.850627  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:38:27.850632  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:38:27.850700  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:38:27.876056  528764 cri.go:89] found id: ""
	I1217 20:38:27.876070  528764 logs.go:282] 0 containers: []
	W1217 20:38:27.876082  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:38:27.876087  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:38:27.876151  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:38:27.901899  528764 cri.go:89] found id: ""
	I1217 20:38:27.901913  528764 logs.go:282] 0 containers: []
	W1217 20:38:27.901920  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:38:27.901926  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:38:27.901997  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:38:27.931527  528764 cri.go:89] found id: ""
	I1217 20:38:27.931541  528764 logs.go:282] 0 containers: []
	W1217 20:38:27.931548  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:38:27.931553  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:38:27.931627  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:38:27.956390  528764 cri.go:89] found id: ""
	I1217 20:38:27.956404  528764 logs.go:282] 0 containers: []
	W1217 20:38:27.956411  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:38:27.956417  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:38:27.956473  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:38:27.985929  528764 cri.go:89] found id: ""
	I1217 20:38:27.985943  528764 logs.go:282] 0 containers: []
	W1217 20:38:27.985951  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:38:27.985959  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:38:27.985970  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:38:28.054474  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:38:28.054492  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:38:28.070115  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:38:28.070132  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:38:28.151327  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:38:28.142186   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:28.142985   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:28.145194   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:28.145756   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:28.147299   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:38:28.142186   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:28.142985   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:28.145194   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:28.145756   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:28.147299   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:38:28.151337  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:38:28.151347  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:38:28.220518  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:38:28.220542  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:38:30.755166  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:38:30.765287  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:38:30.765345  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:38:30.790103  528764 cri.go:89] found id: ""
	I1217 20:38:30.790117  528764 logs.go:282] 0 containers: []
	W1217 20:38:30.790139  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:38:30.790145  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:38:30.790209  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:38:30.815526  528764 cri.go:89] found id: ""
	I1217 20:38:30.815539  528764 logs.go:282] 0 containers: []
	W1217 20:38:30.815547  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:38:30.815552  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:38:30.815647  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:38:30.841851  528764 cri.go:89] found id: ""
	I1217 20:38:30.841864  528764 logs.go:282] 0 containers: []
	W1217 20:38:30.841884  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:38:30.841890  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:38:30.841963  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:38:30.866784  528764 cri.go:89] found id: ""
	I1217 20:38:30.866798  528764 logs.go:282] 0 containers: []
	W1217 20:38:30.866829  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:38:30.866834  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:38:30.866922  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:38:30.892935  528764 cri.go:89] found id: ""
	I1217 20:38:30.892948  528764 logs.go:282] 0 containers: []
	W1217 20:38:30.892956  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:38:30.892961  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:38:30.893017  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:38:30.918525  528764 cri.go:89] found id: ""
	I1217 20:38:30.918545  528764 logs.go:282] 0 containers: []
	W1217 20:38:30.918552  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:38:30.918558  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:38:30.918624  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:38:30.946571  528764 cri.go:89] found id: ""
	I1217 20:38:30.946586  528764 logs.go:282] 0 containers: []
	W1217 20:38:30.946593  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:38:30.946600  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:38:30.946620  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:38:31.016310  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:38:31.016330  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:38:31.031710  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:38:31.031729  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:38:31.121622  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:38:31.112851   14070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:31.113732   14070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:31.115400   14070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:31.115997   14070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:31.117664   14070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 20:38:31.121632  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:38:31.121643  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:38:31.191069  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:38:31.191089  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:38:33.724221  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:38:33.734488  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:38:33.734549  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:38:33.761235  528764 cri.go:89] found id: ""
	I1217 20:38:33.761249  528764 logs.go:282] 0 containers: []
	W1217 20:38:33.761256  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:38:33.761262  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:38:33.761322  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:38:33.787337  528764 cri.go:89] found id: ""
	I1217 20:38:33.787350  528764 logs.go:282] 0 containers: []
	W1217 20:38:33.787358  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:38:33.787363  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:38:33.787432  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:38:33.812684  528764 cri.go:89] found id: ""
	I1217 20:38:33.812706  528764 logs.go:282] 0 containers: []
	W1217 20:38:33.812714  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:38:33.812719  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:38:33.812784  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:38:33.842819  528764 cri.go:89] found id: ""
	I1217 20:38:33.842832  528764 logs.go:282] 0 containers: []
	W1217 20:38:33.842854  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:38:33.842865  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:38:33.842929  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:38:33.868875  528764 cri.go:89] found id: ""
	I1217 20:38:33.868889  528764 logs.go:282] 0 containers: []
	W1217 20:38:33.868897  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:38:33.868902  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:38:33.868961  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:38:33.898309  528764 cri.go:89] found id: ""
	I1217 20:38:33.898323  528764 logs.go:282] 0 containers: []
	W1217 20:38:33.898331  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:38:33.898356  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:38:33.898425  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:38:33.924913  528764 cri.go:89] found id: ""
	I1217 20:38:33.924927  528764 logs.go:282] 0 containers: []
	W1217 20:38:33.924935  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:38:33.924943  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:38:33.924957  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:38:33.990911  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:38:33.990930  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:38:34.008276  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:38:34.008297  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:38:34.087503  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:38:34.076899   14176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:34.077640   14176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:34.079660   14176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:34.080396   14176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:34.083391   14176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 20:38:34.087514  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:38:34.087537  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:38:34.163882  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:38:34.163901  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:38:36.694644  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:38:36.704742  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:38:36.704803  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:38:36.730340  528764 cri.go:89] found id: ""
	I1217 20:38:36.730354  528764 logs.go:282] 0 containers: []
	W1217 20:38:36.730363  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:38:36.730369  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:38:36.730426  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:38:36.757473  528764 cri.go:89] found id: ""
	I1217 20:38:36.757486  528764 logs.go:282] 0 containers: []
	W1217 20:38:36.757493  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:38:36.757499  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:38:36.757554  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:38:36.786113  528764 cri.go:89] found id: ""
	I1217 20:38:36.786127  528764 logs.go:282] 0 containers: []
	W1217 20:38:36.786135  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:38:36.786140  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:38:36.786246  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:38:36.812385  528764 cri.go:89] found id: ""
	I1217 20:38:36.812399  528764 logs.go:282] 0 containers: []
	W1217 20:38:36.812407  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:38:36.812412  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:38:36.812471  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:38:36.837075  528764 cri.go:89] found id: ""
	I1217 20:38:36.837088  528764 logs.go:282] 0 containers: []
	W1217 20:38:36.837095  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:38:36.837100  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:38:36.837156  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:38:36.866713  528764 cri.go:89] found id: ""
	I1217 20:38:36.866727  528764 logs.go:282] 0 containers: []
	W1217 20:38:36.866734  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:38:36.866740  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:38:36.866808  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:38:36.896063  528764 cri.go:89] found id: ""
	I1217 20:38:36.896078  528764 logs.go:282] 0 containers: []
	W1217 20:38:36.896085  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:38:36.896093  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:38:36.896106  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:38:36.961772  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:38:36.961793  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:38:36.976619  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:38:36.976637  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:38:37.049152  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:38:37.040423   14284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:37.041223   14284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:37.042928   14284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:37.043675   14284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:37.045309   14284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 20:38:37.049163  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:38:37.049174  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:38:37.119769  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:38:37.119788  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:38:39.651068  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:38:39.661185  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:38:39.661251  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:38:39.686602  528764 cri.go:89] found id: ""
	I1217 20:38:39.686616  528764 logs.go:282] 0 containers: []
	W1217 20:38:39.686623  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:38:39.686628  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:38:39.686685  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:38:39.711563  528764 cri.go:89] found id: ""
	I1217 20:38:39.711577  528764 logs.go:282] 0 containers: []
	W1217 20:38:39.711602  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:38:39.711608  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:38:39.711674  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:38:39.738013  528764 cri.go:89] found id: ""
	I1217 20:38:39.738027  528764 logs.go:282] 0 containers: []
	W1217 20:38:39.738034  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:38:39.738039  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:38:39.738094  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:38:39.763309  528764 cri.go:89] found id: ""
	I1217 20:38:39.763323  528764 logs.go:282] 0 containers: []
	W1217 20:38:39.763330  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:38:39.763336  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:38:39.763396  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:38:39.788615  528764 cri.go:89] found id: ""
	I1217 20:38:39.788628  528764 logs.go:282] 0 containers: []
	W1217 20:38:39.788640  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:38:39.788645  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:38:39.788701  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:38:39.813921  528764 cri.go:89] found id: ""
	I1217 20:38:39.813935  528764 logs.go:282] 0 containers: []
	W1217 20:38:39.813942  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:38:39.813948  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:38:39.814006  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:38:39.843230  528764 cri.go:89] found id: ""
	I1217 20:38:39.843244  528764 logs.go:282] 0 containers: []
	W1217 20:38:39.843252  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:38:39.843260  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:38:39.843271  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:38:39.857938  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:38:39.857954  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:38:39.921708  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:38:39.913994   14388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:39.914417   14388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:39.915990   14388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:39.916326   14388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:39.917797   14388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 20:38:39.921717  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:38:39.921730  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:38:39.992421  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:38:39.992444  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:38:40.032432  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:38:40.032451  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:38:42.605010  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:38:42.614872  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:38:42.614934  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:38:42.639899  528764 cri.go:89] found id: ""
	I1217 20:38:42.639913  528764 logs.go:282] 0 containers: []
	W1217 20:38:42.639920  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:38:42.639926  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:38:42.639996  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:38:42.670021  528764 cri.go:89] found id: ""
	I1217 20:38:42.670036  528764 logs.go:282] 0 containers: []
	W1217 20:38:42.670049  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:38:42.670055  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:38:42.670116  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:38:42.696223  528764 cri.go:89] found id: ""
	I1217 20:38:42.696237  528764 logs.go:282] 0 containers: []
	W1217 20:38:42.696244  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:38:42.696251  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:38:42.696310  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:38:42.722579  528764 cri.go:89] found id: ""
	I1217 20:38:42.722593  528764 logs.go:282] 0 containers: []
	W1217 20:38:42.722606  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:38:42.722612  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:38:42.722668  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:38:42.747677  528764 cri.go:89] found id: ""
	I1217 20:38:42.747690  528764 logs.go:282] 0 containers: []
	W1217 20:38:42.747698  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:38:42.747703  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:38:42.747764  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:38:42.774015  528764 cri.go:89] found id: ""
	I1217 20:38:42.774029  528764 logs.go:282] 0 containers: []
	W1217 20:38:42.774036  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:38:42.774053  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:38:42.774112  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:38:42.799502  528764 cri.go:89] found id: ""
	I1217 20:38:42.799516  528764 logs.go:282] 0 containers: []
	W1217 20:38:42.799525  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:38:42.799533  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:38:42.799543  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:38:42.865035  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:38:42.865058  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:38:42.880616  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:38:42.880633  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:38:42.949493  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:38:42.939951   14495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:42.940704   14495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:42.942455   14495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:42.943033   14495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:42.944768   14495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 20:38:42.949505  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:38:42.949528  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:38:43.019292  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:38:43.019312  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:38:45.548705  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:38:45.558968  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:38:45.559027  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:38:45.583967  528764 cri.go:89] found id: ""
	I1217 20:38:45.583982  528764 logs.go:282] 0 containers: []
	W1217 20:38:45.583989  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:38:45.583994  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:38:45.584050  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:38:45.609420  528764 cri.go:89] found id: ""
	I1217 20:38:45.609434  528764 logs.go:282] 0 containers: []
	W1217 20:38:45.609441  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:38:45.609447  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:38:45.609508  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:38:45.640522  528764 cri.go:89] found id: ""
	I1217 20:38:45.640546  528764 logs.go:282] 0 containers: []
	W1217 20:38:45.640554  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:38:45.640559  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:38:45.640625  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:38:45.666349  528764 cri.go:89] found id: ""
	I1217 20:38:45.666362  528764 logs.go:282] 0 containers: []
	W1217 20:38:45.666369  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:38:45.666375  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:38:45.666432  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:38:45.696168  528764 cri.go:89] found id: ""
	I1217 20:38:45.696182  528764 logs.go:282] 0 containers: []
	W1217 20:38:45.696189  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:38:45.696194  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:38:45.696255  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:38:45.719763  528764 cri.go:89] found id: ""
	I1217 20:38:45.719777  528764 logs.go:282] 0 containers: []
	W1217 20:38:45.719784  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:38:45.719790  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:38:45.719847  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:38:45.744391  528764 cri.go:89] found id: ""
	I1217 20:38:45.744405  528764 logs.go:282] 0 containers: []
	W1217 20:38:45.744412  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:38:45.744421  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:38:45.744451  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:38:45.809635  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:38:45.809656  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:38:45.824260  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:38:45.824275  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:38:45.887725  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:38:45.879670   14601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:45.880327   14601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:45.881887   14601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:45.882340   14601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:45.883862   14601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 20:38:45.887735  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:38:45.887746  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:38:45.955422  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:38:45.955441  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:38:48.485624  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:38:48.495313  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:38:48.495374  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:38:48.520059  528764 cri.go:89] found id: ""
	I1217 20:38:48.520074  528764 logs.go:282] 0 containers: []
	W1217 20:38:48.520081  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:38:48.520087  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:38:48.520143  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:38:48.545655  528764 cri.go:89] found id: ""
	I1217 20:38:48.545670  528764 logs.go:282] 0 containers: []
	W1217 20:38:48.545677  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:38:48.545682  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:38:48.545740  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:38:48.570521  528764 cri.go:89] found id: ""
	I1217 20:38:48.570535  528764 logs.go:282] 0 containers: []
	W1217 20:38:48.570543  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:38:48.570548  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:38:48.570606  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:38:48.596861  528764 cri.go:89] found id: ""
	I1217 20:38:48.596875  528764 logs.go:282] 0 containers: []
	W1217 20:38:48.596883  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:38:48.596888  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:38:48.596946  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:38:48.623093  528764 cri.go:89] found id: ""
	I1217 20:38:48.623115  528764 logs.go:282] 0 containers: []
	W1217 20:38:48.623123  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:38:48.623128  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:38:48.623203  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:38:48.648854  528764 cri.go:89] found id: ""
	I1217 20:38:48.648868  528764 logs.go:282] 0 containers: []
	W1217 20:38:48.648876  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:38:48.648881  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:38:48.648953  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:38:48.673887  528764 cri.go:89] found id: ""
	I1217 20:38:48.673911  528764 logs.go:282] 0 containers: []
	W1217 20:38:48.673919  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:38:48.673928  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:38:48.673939  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:38:48.739985  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:38:48.740004  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:38:48.754655  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:38:48.754672  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:38:48.818714  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:38:48.810661   14706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:48.811171   14706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:48.812860   14706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:48.813319   14706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:48.814815   14706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 20:38:48.818724  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:38:48.818734  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:38:48.889255  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:38:48.889281  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:38:51.421767  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:38:51.432066  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:38:51.432137  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:38:51.461100  528764 cri.go:89] found id: ""
	I1217 20:38:51.461115  528764 logs.go:282] 0 containers: []
	W1217 20:38:51.461123  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:38:51.461132  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:38:51.461205  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:38:51.493482  528764 cri.go:89] found id: ""
	I1217 20:38:51.493495  528764 logs.go:282] 0 containers: []
	W1217 20:38:51.493503  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:38:51.493508  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:38:51.493573  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:38:51.523360  528764 cri.go:89] found id: ""
	I1217 20:38:51.523374  528764 logs.go:282] 0 containers: []
	W1217 20:38:51.523382  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:38:51.523387  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:38:51.523443  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:38:51.549129  528764 cri.go:89] found id: ""
	I1217 20:38:51.549143  528764 logs.go:282] 0 containers: []
	W1217 20:38:51.549151  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:38:51.549156  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:38:51.549212  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:38:51.575573  528764 cri.go:89] found id: ""
	I1217 20:38:51.575613  528764 logs.go:282] 0 containers: []
	W1217 20:38:51.575621  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:38:51.575631  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:38:51.575698  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:38:51.601059  528764 cri.go:89] found id: ""
	I1217 20:38:51.601074  528764 logs.go:282] 0 containers: []
	W1217 20:38:51.601081  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:38:51.601087  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:38:51.601153  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:38:51.626446  528764 cri.go:89] found id: ""
	I1217 20:38:51.626461  528764 logs.go:282] 0 containers: []
	W1217 20:38:51.626468  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:38:51.626476  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:38:51.626487  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:38:51.693973  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:38:51.693993  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:38:51.724023  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:38:51.724039  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:38:51.788885  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:38:51.788906  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:38:51.803552  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:38:51.803568  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:38:51.866022  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:38:51.858220   14821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:51.858930   14821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:51.860542   14821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:51.860857   14821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:51.862309   14821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 20:38:54.367685  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:38:54.378312  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:38:54.378367  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:38:54.407726  528764 cri.go:89] found id: ""
	I1217 20:38:54.407744  528764 logs.go:282] 0 containers: []
	W1217 20:38:54.407752  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:38:54.407758  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:38:54.407815  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:38:54.432535  528764 cri.go:89] found id: ""
	I1217 20:38:54.432550  528764 logs.go:282] 0 containers: []
	W1217 20:38:54.432557  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:38:54.432562  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:38:54.432623  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:38:54.458438  528764 cri.go:89] found id: ""
	I1217 20:38:54.458453  528764 logs.go:282] 0 containers: []
	W1217 20:38:54.458460  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:38:54.458465  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:38:54.458527  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:38:54.487170  528764 cri.go:89] found id: ""
	I1217 20:38:54.487184  528764 logs.go:282] 0 containers: []
	W1217 20:38:54.487191  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:38:54.487198  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:38:54.487254  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:38:54.512876  528764 cri.go:89] found id: ""
	I1217 20:38:54.512890  528764 logs.go:282] 0 containers: []
	W1217 20:38:54.512897  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:38:54.512902  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:38:54.512959  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:38:54.537031  528764 cri.go:89] found id: ""
	I1217 20:38:54.537044  528764 logs.go:282] 0 containers: []
	W1217 20:38:54.537051  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:38:54.537056  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:38:54.537112  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:38:54.562349  528764 cri.go:89] found id: ""
	I1217 20:38:54.562363  528764 logs.go:282] 0 containers: []
	W1217 20:38:54.562387  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:38:54.562396  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:38:54.562406  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:38:54.628118  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:38:54.628137  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:38:54.642915  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:38:54.642932  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:38:54.707130  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:38:54.699152   14913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:54.699635   14913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:54.701269   14913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:54.701677   14913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:54.703119   14913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:38:54.699152   14913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:54.699635   14913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:54.701269   14913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:54.701677   14913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:54.703119   14913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
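
Each failed probe is followed by the same log-gathering pass: the kubelet and CRI-O units via journalctl, kernel warnings via dmesg, node state via the bundled kubectl, and a container listing with a docker fallback. A sketch of that pass, with the commands copied verbatim from the log (error handling is reduced to a print here; minikube's real collector keeps the output for the report):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Same commands as the "Gathering logs for ..." lines above; each
        // goes through bash -c so the pipes and `...` substitution work.
        steps := []struct{ name, cmd string }{
            {"kubelet", "sudo journalctl -u kubelet -n 400"},
            {"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
            {"CRI-O", "sudo journalctl -u crio -n 400"},
            {"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
        }
        for _, s := range steps {
            out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
            fmt.Printf("== %s: %d bytes (err=%v)\n", s.name, len(out), err)
        }
    }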
	I1217 20:38:54.707141  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:38:54.707152  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:38:54.775317  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:38:54.775338  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:38:57.310952  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:38:57.322922  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:38:57.322983  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:38:57.357392  528764 cri.go:89] found id: ""
	I1217 20:38:57.357406  528764 logs.go:282] 0 containers: []
	W1217 20:38:57.357413  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:38:57.357420  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:38:57.357476  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:38:57.384349  528764 cri.go:89] found id: ""
	I1217 20:38:57.384363  528764 logs.go:282] 0 containers: []
	W1217 20:38:57.384373  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:38:57.384378  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:38:57.384434  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:38:57.412576  528764 cri.go:89] found id: ""
	I1217 20:38:57.412590  528764 logs.go:282] 0 containers: []
	W1217 20:38:57.412598  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:38:57.412603  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:38:57.412662  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:38:57.439190  528764 cri.go:89] found id: ""
	I1217 20:38:57.439205  528764 logs.go:282] 0 containers: []
	W1217 20:38:57.439212  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:38:57.439217  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:38:57.439305  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:38:57.466239  528764 cri.go:89] found id: ""
	I1217 20:38:57.466253  528764 logs.go:282] 0 containers: []
	W1217 20:38:57.466262  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:38:57.466267  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:38:57.466324  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:38:57.491495  528764 cri.go:89] found id: ""
	I1217 20:38:57.491508  528764 logs.go:282] 0 containers: []
	W1217 20:38:57.491516  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:38:57.491522  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:38:57.491597  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:38:57.517009  528764 cri.go:89] found id: ""
	I1217 20:38:57.517023  528764 logs.go:282] 0 containers: []
	W1217 20:38:57.517030  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:38:57.517038  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:38:57.517048  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:38:57.582648  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:38:57.582669  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:38:57.597231  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:38:57.597249  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:38:57.663163  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:38:57.654987   15018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:57.655397   15018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:57.656981   15018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:57.657561   15018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:57.659204   15018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:38:57.654987   15018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:57.655397   15018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:57.656981   15018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:57.657561   15018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:57.659204   15018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:38:57.663174  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:38:57.663186  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:38:57.735126  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:38:57.735151  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:39:00.265877  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:39:00.292750  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:39:00.292841  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:39:00.342493  528764 cri.go:89] found id: ""
	I1217 20:39:00.342529  528764 logs.go:282] 0 containers: []
	W1217 20:39:00.342553  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:39:00.342560  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:39:00.342673  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:39:00.389833  528764 cri.go:89] found id: ""
	I1217 20:39:00.389858  528764 logs.go:282] 0 containers: []
	W1217 20:39:00.389866  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:39:00.389871  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:39:00.389943  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:39:00.427417  528764 cri.go:89] found id: ""
	I1217 20:39:00.427442  528764 logs.go:282] 0 containers: []
	W1217 20:39:00.427450  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:39:00.427455  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:39:00.427525  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:39:00.455698  528764 cri.go:89] found id: ""
	I1217 20:39:00.455712  528764 logs.go:282] 0 containers: []
	W1217 20:39:00.455720  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:39:00.455726  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:39:00.455784  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:39:00.487535  528764 cri.go:89] found id: ""
	I1217 20:39:00.487551  528764 logs.go:282] 0 containers: []
	W1217 20:39:00.487558  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:39:00.487576  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:39:00.487666  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:39:00.514228  528764 cri.go:89] found id: ""
	I1217 20:39:00.514243  528764 logs.go:282] 0 containers: []
	W1217 20:39:00.514251  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:39:00.514256  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:39:00.514315  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:39:00.540536  528764 cri.go:89] found id: ""
	I1217 20:39:00.540561  528764 logs.go:282] 0 containers: []
	W1217 20:39:00.540569  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:39:00.540576  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:39:00.540586  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:39:00.607064  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:39:00.607084  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:39:00.639882  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:39:00.639899  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:39:00.705607  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:39:00.705629  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:39:00.721491  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:39:00.721506  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:39:00.784593  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:39:00.776120   15140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:00.776725   15140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:00.778453   15140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:00.778972   15140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:00.780702   15140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:39:00.776120   15140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:00.776725   15140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:00.778453   15140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:00.778972   15140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:00.780702   15140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
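
Every stderr block in this section is the same failure: dial tcp [::1]:8441 refused, meaning nothing is listening on the apiserver port at all (a TLS or auth problem would get past the dial). That can be confirmed independently of kubectl with a plain TCP dial; the port is taken from the log, the timeout is an arbitrary choice:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // A refused connection fails immediately; a firewalled or hung
        // port would instead burn the whole two-second timeout.
        conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver port closed:", err)
            return
        }
        conn.Close()
        fmt.Println("something is listening on 8441")
    }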
	I1217 20:39:03.284822  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:39:03.295036  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:39:03.295097  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:39:03.333750  528764 cri.go:89] found id: ""
	I1217 20:39:03.333778  528764 logs.go:282] 0 containers: []
	W1217 20:39:03.333786  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:39:03.333792  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:39:03.333861  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:39:03.363983  528764 cri.go:89] found id: ""
	I1217 20:39:03.363997  528764 logs.go:282] 0 containers: []
	W1217 20:39:03.364004  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:39:03.364024  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:39:03.364082  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:39:03.392963  528764 cri.go:89] found id: ""
	I1217 20:39:03.392977  528764 logs.go:282] 0 containers: []
	W1217 20:39:03.392984  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:39:03.392989  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:39:03.393044  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:39:03.419023  528764 cri.go:89] found id: ""
	I1217 20:39:03.419039  528764 logs.go:282] 0 containers: []
	W1217 20:39:03.419046  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:39:03.419052  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:39:03.419108  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:39:03.444813  528764 cri.go:89] found id: ""
	I1217 20:39:03.444826  528764 logs.go:282] 0 containers: []
	W1217 20:39:03.444833  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:39:03.444838  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:39:03.444895  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:39:03.468964  528764 cri.go:89] found id: ""
	I1217 20:39:03.468978  528764 logs.go:282] 0 containers: []
	W1217 20:39:03.468986  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:39:03.468996  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:39:03.469053  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:39:03.494050  528764 cri.go:89] found id: ""
	I1217 20:39:03.494063  528764 logs.go:282] 0 containers: []
	W1217 20:39:03.494071  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:39:03.494078  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:39:03.494087  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:39:03.559830  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:39:03.559849  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:39:03.575390  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:39:03.575407  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:39:03.642132  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:39:03.634093   15231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:03.634724   15231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:03.636305   15231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:03.636854   15231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:03.638302   15231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:39:03.634093   15231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:03.634724   15231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:03.636305   15231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:03.636854   15231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:03.638302   15231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:39:03.642142  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:39:03.642153  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:39:03.710317  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:39:03.710339  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:39:06.242034  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:39:06.252695  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:39:06.252759  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:39:06.278446  528764 cri.go:89] found id: ""
	I1217 20:39:06.278460  528764 logs.go:282] 0 containers: []
	W1217 20:39:06.278467  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:39:06.278477  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:39:06.278573  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:39:06.304597  528764 cri.go:89] found id: ""
	I1217 20:39:06.304612  528764 logs.go:282] 0 containers: []
	W1217 20:39:06.304620  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:39:06.304630  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:39:06.304702  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:39:06.345678  528764 cri.go:89] found id: ""
	I1217 20:39:06.345693  528764 logs.go:282] 0 containers: []
	W1217 20:39:06.345700  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:39:06.345706  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:39:06.345764  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:39:06.381455  528764 cri.go:89] found id: ""
	I1217 20:39:06.381469  528764 logs.go:282] 0 containers: []
	W1217 20:39:06.381476  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:39:06.381482  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:39:06.381542  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:39:06.410677  528764 cri.go:89] found id: ""
	I1217 20:39:06.410691  528764 logs.go:282] 0 containers: []
	W1217 20:39:06.410698  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:39:06.410704  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:39:06.410774  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:39:06.436535  528764 cri.go:89] found id: ""
	I1217 20:39:06.436549  528764 logs.go:282] 0 containers: []
	W1217 20:39:06.436556  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:39:06.436564  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:39:06.436621  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:39:06.467306  528764 cri.go:89] found id: ""
	I1217 20:39:06.467320  528764 logs.go:282] 0 containers: []
	W1217 20:39:06.467327  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:39:06.467335  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:39:06.467345  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:39:06.533557  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:39:06.533577  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:39:06.548883  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:39:06.548901  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:39:06.613032  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:39:06.604590   15336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:06.605314   15336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:06.606990   15336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:06.607539   15336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:06.609092   15336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:39:06.604590   15336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:06.605314   15336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:06.606990   15336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:06.607539   15336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:06.609092   15336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:39:06.613048  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:39:06.613068  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:39:06.682237  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:39:06.682258  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:39:09.211382  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:39:09.221300  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:39:09.221359  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:39:09.246764  528764 cri.go:89] found id: ""
	I1217 20:39:09.246778  528764 logs.go:282] 0 containers: []
	W1217 20:39:09.246785  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:39:09.246790  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:39:09.246867  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:39:09.271248  528764 cri.go:89] found id: ""
	I1217 20:39:09.271261  528764 logs.go:282] 0 containers: []
	W1217 20:39:09.271268  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:39:09.271273  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:39:09.271343  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:39:09.296093  528764 cri.go:89] found id: ""
	I1217 20:39:09.296107  528764 logs.go:282] 0 containers: []
	W1217 20:39:09.296114  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:39:09.296120  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:39:09.296175  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:39:09.325215  528764 cri.go:89] found id: ""
	I1217 20:39:09.325230  528764 logs.go:282] 0 containers: []
	W1217 20:39:09.325236  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:39:09.325241  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:39:09.325304  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:39:09.352141  528764 cri.go:89] found id: ""
	I1217 20:39:09.352155  528764 logs.go:282] 0 containers: []
	W1217 20:39:09.352162  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:39:09.352167  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:39:09.352237  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:39:09.383006  528764 cri.go:89] found id: ""
	I1217 20:39:09.383021  528764 logs.go:282] 0 containers: []
	W1217 20:39:09.383028  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:39:09.383034  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:39:09.383113  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:39:09.414504  528764 cri.go:89] found id: ""
	I1217 20:39:09.414518  528764 logs.go:282] 0 containers: []
	W1217 20:39:09.414526  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:39:09.414534  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:39:09.414566  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:39:09.483870  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:39:09.483889  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:39:09.498851  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:39:09.498867  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:39:09.569431  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:39:09.561559   15441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:09.562122   15441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:09.563640   15441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:09.564216   15441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:09.565635   15441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:39:09.561559   15441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:09.562122   15441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:09.563640   15441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:09.564216   15441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:09.565635   15441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
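
The "describe nodes" step runs the version-matched kubectl binary on the node against /var/lib/minikube/kubeconfig; when discovery fails, the client retries the /api group-list fetch, which is why each failure prints five near-identical memcache.go lines before the summary. A sketch of the same invocation with stdout and stderr kept apart (paths copied from the log; this only makes sense run on the minikube node, not the host):

    package main

    import (
        "bytes"
        "fmt"
        "os/exec"
    )

    func main() {
        kubectl := "/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl"
        cmd := exec.Command("sudo", kubectl, "describe", "nodes",
            "--kubeconfig=/var/lib/minikube/kubeconfig")
        var stdout, stderr bytes.Buffer
        cmd.Stdout, cmd.Stderr = &stdout, &stderr
        err := cmd.Run() // exits 1 while the apiserver is unreachable
        fmt.Printf("err=%v\nstderr:\n%s", err, stderr.String())
    }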
	I1217 20:39:09.569442  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:39:09.569452  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:39:09.636946  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:39:09.636966  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:39:12.165906  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:39:12.176117  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:39:12.176184  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:39:12.202030  528764 cri.go:89] found id: ""
	I1217 20:39:12.202043  528764 logs.go:282] 0 containers: []
	W1217 20:39:12.202051  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:39:12.202056  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:39:12.202111  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:39:12.230473  528764 cri.go:89] found id: ""
	I1217 20:39:12.230487  528764 logs.go:282] 0 containers: []
	W1217 20:39:12.230495  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:39:12.230500  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:39:12.230559  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:39:12.256663  528764 cri.go:89] found id: ""
	I1217 20:39:12.256677  528764 logs.go:282] 0 containers: []
	W1217 20:39:12.256685  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:39:12.256690  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:39:12.256747  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:39:12.284083  528764 cri.go:89] found id: ""
	I1217 20:39:12.284096  528764 logs.go:282] 0 containers: []
	W1217 20:39:12.284104  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:39:12.284109  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:39:12.284168  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:39:12.309047  528764 cri.go:89] found id: ""
	I1217 20:39:12.309062  528764 logs.go:282] 0 containers: []
	W1217 20:39:12.309070  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:39:12.309075  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:39:12.309134  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:39:12.351942  528764 cri.go:89] found id: ""
	I1217 20:39:12.351957  528764 logs.go:282] 0 containers: []
	W1217 20:39:12.351969  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:39:12.351975  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:39:12.352034  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:39:12.390734  528764 cri.go:89] found id: ""
	I1217 20:39:12.390765  528764 logs.go:282] 0 containers: []
	W1217 20:39:12.390773  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:39:12.390782  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:39:12.390793  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:39:12.456083  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:39:12.456103  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:39:12.471218  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:39:12.471239  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:39:12.538690  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:39:12.527207   15546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:12.527702   15546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:12.529370   15546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:12.529843   15546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:12.531751   15546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:39:12.527207   15546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:12.527702   15546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:12.529370   15546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:12.529843   15546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:12.531751   15546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:39:12.538707  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:39:12.538718  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:39:12.605751  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:39:12.605772  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:39:15.135835  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:39:15.146221  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:39:15.146280  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:39:15.176272  528764 cri.go:89] found id: ""
	I1217 20:39:15.176286  528764 logs.go:282] 0 containers: []
	W1217 20:39:15.176294  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:39:15.176301  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:39:15.176357  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:39:15.206452  528764 cri.go:89] found id: ""
	I1217 20:39:15.206466  528764 logs.go:282] 0 containers: []
	W1217 20:39:15.206474  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:39:15.206479  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:39:15.206548  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:39:15.231899  528764 cri.go:89] found id: ""
	I1217 20:39:15.231914  528764 logs.go:282] 0 containers: []
	W1217 20:39:15.231921  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:39:15.231927  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:39:15.231996  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:39:15.257093  528764 cri.go:89] found id: ""
	I1217 20:39:15.257106  528764 logs.go:282] 0 containers: []
	W1217 20:39:15.257113  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:39:15.257119  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:39:15.257174  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:39:15.281692  528764 cri.go:89] found id: ""
	I1217 20:39:15.281706  528764 logs.go:282] 0 containers: []
	W1217 20:39:15.281714  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:39:15.281719  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:39:15.281777  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:39:15.310093  528764 cri.go:89] found id: ""
	I1217 20:39:15.310107  528764 logs.go:282] 0 containers: []
	W1217 20:39:15.310114  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:39:15.310119  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:39:15.310193  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:39:15.349800  528764 cri.go:89] found id: ""
	I1217 20:39:15.349813  528764 logs.go:282] 0 containers: []
	W1217 20:39:15.349830  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:39:15.349839  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:39:15.349850  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:39:15.426883  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:39:15.426904  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:39:15.442044  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:39:15.442059  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:39:15.512531  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:39:15.503665   15651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:15.504418   15651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:15.506067   15651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:15.506537   15651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:15.508077   15651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:39:15.503665   15651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:15.504418   15651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:15.506067   15651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:15.506537   15651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:15.508077   15651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:39:15.512542  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:39:15.512554  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:39:15.587396  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:39:15.587422  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:39:18.121184  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:39:18.131563  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:39:18.131644  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:39:18.157091  528764 cri.go:89] found id: ""
	I1217 20:39:18.157105  528764 logs.go:282] 0 containers: []
	W1217 20:39:18.157113  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:39:18.157118  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:39:18.157177  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:39:18.183414  528764 cri.go:89] found id: ""
	I1217 20:39:18.183428  528764 logs.go:282] 0 containers: []
	W1217 20:39:18.183452  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:39:18.183457  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:39:18.183523  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:39:18.210558  528764 cri.go:89] found id: ""
	I1217 20:39:18.210586  528764 logs.go:282] 0 containers: []
	W1217 20:39:18.210595  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:39:18.210600  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:39:18.210667  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:39:18.236623  528764 cri.go:89] found id: ""
	I1217 20:39:18.236653  528764 logs.go:282] 0 containers: []
	W1217 20:39:18.236661  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:39:18.236666  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:39:18.236730  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:39:18.263889  528764 cri.go:89] found id: ""
	I1217 20:39:18.263903  528764 logs.go:282] 0 containers: []
	W1217 20:39:18.263911  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:39:18.263916  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:39:18.263977  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:39:18.289661  528764 cri.go:89] found id: ""
	I1217 20:39:18.289675  528764 logs.go:282] 0 containers: []
	W1217 20:39:18.289683  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:39:18.289688  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:39:18.289743  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:39:18.314115  528764 cri.go:89] found id: ""
	I1217 20:39:18.314129  528764 logs.go:282] 0 containers: []
	W1217 20:39:18.314136  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:39:18.314143  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:39:18.314165  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:39:18.382890  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:39:18.382909  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:39:18.425251  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:39:18.425268  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:39:18.493317  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:39:18.493336  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:39:18.509454  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:39:18.509470  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:39:18.571731  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:39:18.563250   15771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:18.563867   15771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:18.564819   15771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:18.566275   15771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:18.566732   15771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
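	Each "describe nodes" attempt above fails identically: nothing is listening on localhost:8441, so kubectl cannot reach an apiserver that never came up. The snippet below is a sketch of the same probe run by hand from inside the node (for example via minikube ssh); the kubectl and kubeconfig paths are copied verbatim from the log, while the curl health check is an extra assumption (it presumes curl is present on the node).

	    # Re-run the exact probe the log runs; expect "connection refused"
	    # until an apiserver is actually listening on 8441.
	    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes \
	      --kubeconfig=/var/lib/minikube/kubeconfig
	    # Assumed extra check (not from the log): probe the apiserver port directly.
	    curl -k https://localhost:8441/healthz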
	I1217 20:39:21.073445  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:39:21.083815  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:39:21.083874  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:39:21.113281  528764 cri.go:89] found id: ""
	I1217 20:39:21.113295  528764 logs.go:282] 0 containers: []
	W1217 20:39:21.113302  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:39:21.113307  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:39:21.113365  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:39:21.142024  528764 cri.go:89] found id: ""
	I1217 20:39:21.142039  528764 logs.go:282] 0 containers: []
	W1217 20:39:21.142046  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:39:21.142059  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:39:21.142123  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:39:21.170658  528764 cri.go:89] found id: ""
	I1217 20:39:21.170678  528764 logs.go:282] 0 containers: []
	W1217 20:39:21.170686  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:39:21.170691  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:39:21.170756  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:39:21.196194  528764 cri.go:89] found id: ""
	I1217 20:39:21.196207  528764 logs.go:282] 0 containers: []
	W1217 20:39:21.196214  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:39:21.196220  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:39:21.196277  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:39:21.222255  528764 cri.go:89] found id: ""
	I1217 20:39:21.222269  528764 logs.go:282] 0 containers: []
	W1217 20:39:21.222276  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:39:21.222282  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:39:21.222355  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:39:21.247912  528764 cri.go:89] found id: ""
	I1217 20:39:21.247926  528764 logs.go:282] 0 containers: []
	W1217 20:39:21.247933  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:39:21.247939  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:39:21.247996  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:39:21.278136  528764 cri.go:89] found id: ""
	I1217 20:39:21.278151  528764 logs.go:282] 0 containers: []
	W1217 20:39:21.278158  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:39:21.278175  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:39:21.278187  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:39:21.346881  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:39:21.346899  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:39:21.363101  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:39:21.363117  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:39:21.431000  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:39:21.421965   15864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:21.422735   15864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:21.424535   15864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:21.425238   15864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:21.426896   15864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 20:39:21.431011  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:39:21.431024  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:39:21.499494  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:39:21.499512  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:39:24.028859  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:39:24.039467  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:39:24.039528  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:39:24.065108  528764 cri.go:89] found id: ""
	I1217 20:39:24.065122  528764 logs.go:282] 0 containers: []
	W1217 20:39:24.065130  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:39:24.065135  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:39:24.065193  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:39:24.090624  528764 cri.go:89] found id: ""
	I1217 20:39:24.090638  528764 logs.go:282] 0 containers: []
	W1217 20:39:24.090647  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:39:24.090652  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:39:24.090710  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:39:24.116315  528764 cri.go:89] found id: ""
	I1217 20:39:24.116331  528764 logs.go:282] 0 containers: []
	W1217 20:39:24.116339  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:39:24.116345  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:39:24.116414  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:39:24.141792  528764 cri.go:89] found id: ""
	I1217 20:39:24.141806  528764 logs.go:282] 0 containers: []
	W1217 20:39:24.141813  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:39:24.141818  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:39:24.141877  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:39:24.170297  528764 cri.go:89] found id: ""
	I1217 20:39:24.170310  528764 logs.go:282] 0 containers: []
	W1217 20:39:24.170318  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:39:24.170324  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:39:24.170378  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:39:24.199383  528764 cri.go:89] found id: ""
	I1217 20:39:24.199397  528764 logs.go:282] 0 containers: []
	W1217 20:39:24.199404  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:39:24.199411  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:39:24.199477  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:39:24.224443  528764 cri.go:89] found id: ""
	I1217 20:39:24.224457  528764 logs.go:282] 0 containers: []
	W1217 20:39:24.224464  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:39:24.224471  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:39:24.224496  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:39:24.253379  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:39:24.253396  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:39:24.322404  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:39:24.322423  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:39:24.340551  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:39:24.340569  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:39:24.409290  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:39:24.400697   15977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:24.401399   15977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:24.402586   15977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:24.403250   15977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:24.404963   15977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 20:39:24.409305  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:39:24.409316  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
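	Every cycle opens with the same sweep: one crictl query per expected control-plane component, and each query here returns an empty ID list, meaning no container for that component exists in any state. A minimal sketch of the same sweep, using the crictl invocation copied from the log (the component list is the one the log queries):

	    # Empty output from crictl means the component has no container at all,
	    # not even an exited one.
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	             kube-controller-manager kindnet; do
	      ids=$(sudo crictl ps -a --quiet --name="$c")
	      if [ -z "$ids" ]; then
	        echo "no container found matching \"$c\""
	      else
	        echo "$c: $ids"
	      fi
	    done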
	I1217 20:39:26.976820  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:39:26.986804  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:39:26.986885  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:39:27.015438  528764 cri.go:89] found id: ""
	I1217 20:39:27.015453  528764 logs.go:282] 0 containers: []
	W1217 20:39:27.015460  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:39:27.015466  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:39:27.015545  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:39:27.041591  528764 cri.go:89] found id: ""
	I1217 20:39:27.041605  528764 logs.go:282] 0 containers: []
	W1217 20:39:27.041613  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:39:27.041619  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:39:27.041680  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:39:27.066798  528764 cri.go:89] found id: ""
	I1217 20:39:27.066812  528764 logs.go:282] 0 containers: []
	W1217 20:39:27.066819  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:39:27.066851  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:39:27.066908  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:39:27.091716  528764 cri.go:89] found id: ""
	I1217 20:39:27.091730  528764 logs.go:282] 0 containers: []
	W1217 20:39:27.091737  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:39:27.091743  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:39:27.091797  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:39:27.116523  528764 cri.go:89] found id: ""
	I1217 20:39:27.116536  528764 logs.go:282] 0 containers: []
	W1217 20:39:27.116544  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:39:27.116550  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:39:27.116612  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:39:27.140982  528764 cri.go:89] found id: ""
	I1217 20:39:27.140996  528764 logs.go:282] 0 containers: []
	W1217 20:39:27.141004  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:39:27.141009  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:39:27.141064  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:39:27.170754  528764 cri.go:89] found id: ""
	I1217 20:39:27.170769  528764 logs.go:282] 0 containers: []
	W1217 20:39:27.170777  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:39:27.170784  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:39:27.170805  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:39:27.234403  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:39:27.226295   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:27.226681   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:27.228247   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:27.228832   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:27.230386   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 20:39:27.234413  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:39:27.234463  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:39:27.306551  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:39:27.306570  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:39:27.342575  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:39:27.342597  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:39:27.416305  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:39:27.416325  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:39:29.931568  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:39:29.941696  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:39:29.941790  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:39:29.970561  528764 cri.go:89] found id: ""
	I1217 20:39:29.970576  528764 logs.go:282] 0 containers: []
	W1217 20:39:29.970583  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:39:29.970588  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:39:29.970644  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:39:29.995538  528764 cri.go:89] found id: ""
	I1217 20:39:29.995551  528764 logs.go:282] 0 containers: []
	W1217 20:39:29.995559  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:39:29.995564  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:39:29.995645  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:39:30.047472  528764 cri.go:89] found id: ""
	I1217 20:39:30.047487  528764 logs.go:282] 0 containers: []
	W1217 20:39:30.047496  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:39:30.047501  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:39:30.047568  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:39:30.077580  528764 cri.go:89] found id: ""
	I1217 20:39:30.077595  528764 logs.go:282] 0 containers: []
	W1217 20:39:30.077603  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:39:30.077609  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:39:30.077686  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:39:30.111544  528764 cri.go:89] found id: ""
	I1217 20:39:30.111574  528764 logs.go:282] 0 containers: []
	W1217 20:39:30.111618  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:39:30.111624  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:39:30.111705  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:39:30.139478  528764 cri.go:89] found id: ""
	I1217 20:39:30.139504  528764 logs.go:282] 0 containers: []
	W1217 20:39:30.139513  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:39:30.139518  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:39:30.139611  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:39:30.169107  528764 cri.go:89] found id: ""
	I1217 20:39:30.169121  528764 logs.go:282] 0 containers: []
	W1217 20:39:30.169128  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:39:30.169136  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:39:30.169146  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:39:30.234963  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:39:30.234982  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:39:30.250550  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:39:30.250577  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:39:30.320870  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:39:30.310827   16172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:30.311705   16172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:30.313455   16172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:30.314025   16172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:30.315795   16172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 20:39:30.320884  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:39:30.320894  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:39:30.397776  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:39:30.397796  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:39:32.932751  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:39:32.942813  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:39:32.942885  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:39:32.968405  528764 cri.go:89] found id: ""
	I1217 20:39:32.968418  528764 logs.go:282] 0 containers: []
	W1217 20:39:32.968425  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:39:32.968431  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:39:32.968503  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:39:32.991973  528764 cri.go:89] found id: ""
	I1217 20:39:32.991987  528764 logs.go:282] 0 containers: []
	W1217 20:39:32.991994  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:39:32.992005  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:39:32.992063  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:39:33.019478  528764 cri.go:89] found id: ""
	I1217 20:39:33.019492  528764 logs.go:282] 0 containers: []
	W1217 20:39:33.019500  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:39:33.019505  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:39:33.019572  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:39:33.044942  528764 cri.go:89] found id: ""
	I1217 20:39:33.044958  528764 logs.go:282] 0 containers: []
	W1217 20:39:33.044965  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:39:33.044970  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:39:33.045028  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:39:33.072242  528764 cri.go:89] found id: ""
	I1217 20:39:33.072256  528764 logs.go:282] 0 containers: []
	W1217 20:39:33.072263  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:39:33.072268  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:39:33.072332  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:39:33.101598  528764 cri.go:89] found id: ""
	I1217 20:39:33.101611  528764 logs.go:282] 0 containers: []
	W1217 20:39:33.101619  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:39:33.101624  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:39:33.101677  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:39:33.127765  528764 cri.go:89] found id: ""
	I1217 20:39:33.127780  528764 logs.go:282] 0 containers: []
	W1217 20:39:33.127805  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:39:33.127813  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:39:33.127830  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:39:33.193505  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:39:33.193524  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:39:33.209404  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:39:33.209419  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:39:33.278213  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:39:33.269512   16277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:33.270341   16277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:33.272086   16277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:33.272605   16277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:33.274151   16277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 20:39:33.278224  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:39:33.278234  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:39:33.352890  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:39:33.352911  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:39:35.892717  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:39:35.902865  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:39:35.902923  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:39:35.927963  528764 cri.go:89] found id: ""
	I1217 20:39:35.927977  528764 logs.go:282] 0 containers: []
	W1217 20:39:35.927985  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:39:35.927990  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:39:35.928047  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:39:35.953995  528764 cri.go:89] found id: ""
	I1217 20:39:35.954010  528764 logs.go:282] 0 containers: []
	W1217 20:39:35.954017  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:39:35.954022  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:39:35.954078  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:39:35.978944  528764 cri.go:89] found id: ""
	I1217 20:39:35.978958  528764 logs.go:282] 0 containers: []
	W1217 20:39:35.978965  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:39:35.978971  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:39:35.979027  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:39:36.009908  528764 cri.go:89] found id: ""
	I1217 20:39:36.009923  528764 logs.go:282] 0 containers: []
	W1217 20:39:36.009932  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:39:36.009938  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:39:36.010005  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:39:36.036093  528764 cri.go:89] found id: ""
	I1217 20:39:36.036106  528764 logs.go:282] 0 containers: []
	W1217 20:39:36.036114  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:39:36.036125  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:39:36.036189  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:39:36.064858  528764 cri.go:89] found id: ""
	I1217 20:39:36.064873  528764 logs.go:282] 0 containers: []
	W1217 20:39:36.064880  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:39:36.064888  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:39:36.064943  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:39:36.091213  528764 cri.go:89] found id: ""
	I1217 20:39:36.091228  528764 logs.go:282] 0 containers: []
	W1217 20:39:36.091236  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:39:36.091243  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:39:36.091265  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:39:36.123131  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:39:36.123147  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:39:36.192190  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:39:36.192209  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:39:36.207423  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:39:36.207441  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:39:36.274672  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:39:36.265622   16394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:36.266359   16394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:36.267351   16394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:36.268947   16394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:36.269621   16394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 20:39:36.274682  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:39:36.274693  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:39:38.848137  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:39:38.858186  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:39:38.858245  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:39:38.887476  528764 cri.go:89] found id: ""
	I1217 20:39:38.887491  528764 logs.go:282] 0 containers: []
	W1217 20:39:38.887498  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:39:38.887503  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:39:38.887559  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:39:38.913669  528764 cri.go:89] found id: ""
	I1217 20:39:38.913683  528764 logs.go:282] 0 containers: []
	W1217 20:39:38.913691  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:39:38.913696  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:39:38.913753  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:39:38.938922  528764 cri.go:89] found id: ""
	I1217 20:39:38.938937  528764 logs.go:282] 0 containers: []
	W1217 20:39:38.938945  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:39:38.938950  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:39:38.939010  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:39:38.964782  528764 cri.go:89] found id: ""
	I1217 20:39:38.964796  528764 logs.go:282] 0 containers: []
	W1217 20:39:38.964804  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:39:38.964809  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:39:38.964869  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:39:38.990990  528764 cri.go:89] found id: ""
	I1217 20:39:38.991004  528764 logs.go:282] 0 containers: []
	W1217 20:39:38.991012  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:39:38.991017  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:39:38.991087  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:39:39.019624  528764 cri.go:89] found id: ""
	I1217 20:39:39.019638  528764 logs.go:282] 0 containers: []
	W1217 20:39:39.019645  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:39:39.019651  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:39:39.019712  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:39:39.049943  528764 cri.go:89] found id: ""
	I1217 20:39:39.049957  528764 logs.go:282] 0 containers: []
	W1217 20:39:39.049964  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:39:39.049971  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:39:39.049982  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:39:39.114679  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:39:39.114699  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:39:39.129526  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:39:39.129544  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:39:39.192131  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:39:39.184273   16490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:39.185000   16490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:39.186617   16490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:39.186938   16490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:39.188434   16490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 20:39:39.192141  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:39:39.192151  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:39:39.262829  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:39:39.262849  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:39:41.796129  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:39:41.805988  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:39:41.806050  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:39:41.830659  528764 cri.go:89] found id: ""
	I1217 20:39:41.830688  528764 logs.go:282] 0 containers: []
	W1217 20:39:41.830696  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:39:41.830702  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:39:41.830772  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:39:41.855846  528764 cri.go:89] found id: ""
	I1217 20:39:41.855861  528764 logs.go:282] 0 containers: []
	W1217 20:39:41.855868  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:39:41.855874  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:39:41.855937  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:39:41.880126  528764 cri.go:89] found id: ""
	I1217 20:39:41.880139  528764 logs.go:282] 0 containers: []
	W1217 20:39:41.880147  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:39:41.880151  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:39:41.880205  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:39:41.909006  528764 cri.go:89] found id: ""
	I1217 20:39:41.909020  528764 logs.go:282] 0 containers: []
	W1217 20:39:41.909027  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:39:41.909032  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:39:41.909088  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:39:41.938559  528764 cri.go:89] found id: ""
	I1217 20:39:41.938573  528764 logs.go:282] 0 containers: []
	W1217 20:39:41.938580  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:39:41.938585  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:39:41.938646  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:39:41.966291  528764 cri.go:89] found id: ""
	I1217 20:39:41.966305  528764 logs.go:282] 0 containers: []
	W1217 20:39:41.966312  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:39:41.966317  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:39:41.966380  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:39:41.991150  528764 cri.go:89] found id: ""
	I1217 20:39:41.991164  528764 logs.go:282] 0 containers: []
	W1217 20:39:41.991172  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:39:41.991180  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:39:41.991190  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:39:42.024918  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:39:42.024936  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:39:42.094047  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:39:42.094069  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:39:42.113717  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:39:42.113737  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:39:42.191163  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:39:42.180141   16604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:42.180682   16604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:42.183783   16604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:42.184295   16604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:42.186218   16604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 20:39:42.191176  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:39:42.191195  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
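	The pgrep timestamps (20:39:21, :24, :27, ... :44) show the wait loop retrying roughly every three seconds without ever finding a kube-apiserver process. A sketch of an equivalent shell wait, with the pgrep pattern copied from the log; the 60-second deadline is illustrative, not minikube's actual timeout:

	    # Poll for a running apiserver process at ~3 s intervals.
	    deadline=$((SECONDS + 60))
	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	      if [ "$SECONDS" -ge "$deadline" ]; then
	        echo "kube-apiserver never started" >&2
	        exit 1
	      fi
	      sleep 3
	    done
	    echo "kube-apiserver is up"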
	I1217 20:39:44.772767  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:39:44.783138  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:39:44.783204  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:39:44.811282  528764 cri.go:89] found id: ""
	I1217 20:39:44.811296  528764 logs.go:282] 0 containers: []
	W1217 20:39:44.811304  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:39:44.811309  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:39:44.811369  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:39:44.838690  528764 cri.go:89] found id: ""
	I1217 20:39:44.838704  528764 logs.go:282] 0 containers: []
	W1217 20:39:44.838711  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:39:44.838717  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:39:44.838776  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:39:44.866668  528764 cri.go:89] found id: ""
	I1217 20:39:44.866683  528764 logs.go:282] 0 containers: []
	W1217 20:39:44.866690  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:39:44.866696  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:39:44.866751  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:39:44.892383  528764 cri.go:89] found id: ""
	I1217 20:39:44.892397  528764 logs.go:282] 0 containers: []
	W1217 20:39:44.892405  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:39:44.892410  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:39:44.892468  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:39:44.921797  528764 cri.go:89] found id: ""
	I1217 20:39:44.921812  528764 logs.go:282] 0 containers: []
	W1217 20:39:44.921819  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:39:44.921825  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:39:44.921885  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:39:44.947362  528764 cri.go:89] found id: ""
	I1217 20:39:44.947376  528764 logs.go:282] 0 containers: []
	W1217 20:39:44.947384  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:39:44.947389  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:39:44.947446  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:39:44.974284  528764 cri.go:89] found id: ""
	I1217 20:39:44.974297  528764 logs.go:282] 0 containers: []
	W1217 20:39:44.974305  528764 logs.go:284] No container was found matching "kindnet"
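
Every retry begins with the same per-component probe: crictl is asked for any container, running or exited, whose name matches a control-plane component, and an empty ID list yields the "No container was found matching" warnings above. The sweep written out as a loop (a sketch; the component list is taken from the log, the loop itself is a readability device):

    # control-plane container probe (sketch of the cri.go listing)
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$c")   # -a includes exited containers
      [ -z "$ids" ] && echo "no container matching $c"
    done
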
	I1217 20:39:44.974312  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:39:44.974323  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:39:45.077487  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:39:45.067021   16692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:45.068124   16692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:45.068971   16692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:45.071090   16692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:45.072380   16692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:39:45.067021   16692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:45.068124   16692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:45.068971   16692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:45.071090   16692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:45.072380   16692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:39:45.077499  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:39:45.077511  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:39:45.185472  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:39:45.185499  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:39:45.244734  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:39:45.244753  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:39:45.320383  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:39:45.320403  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:39:47.839254  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:39:47.849450  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:39:47.849509  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:39:47.878517  528764 cri.go:89] found id: ""
	I1217 20:39:47.878531  528764 logs.go:282] 0 containers: []
	W1217 20:39:47.878539  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:39:47.878554  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:39:47.878612  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:39:47.904739  528764 cri.go:89] found id: ""
	I1217 20:39:47.904754  528764 logs.go:282] 0 containers: []
	W1217 20:39:47.904762  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:39:47.904767  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:39:47.904823  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:39:47.929572  528764 cri.go:89] found id: ""
	I1217 20:39:47.929586  528764 logs.go:282] 0 containers: []
	W1217 20:39:47.929593  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:39:47.929599  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:39:47.929658  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:39:47.958617  528764 cri.go:89] found id: ""
	I1217 20:39:47.958631  528764 logs.go:282] 0 containers: []
	W1217 20:39:47.958639  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:39:47.958644  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:39:47.958701  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:39:47.984420  528764 cri.go:89] found id: ""
	I1217 20:39:47.984434  528764 logs.go:282] 0 containers: []
	W1217 20:39:47.984441  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:39:47.984447  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:39:47.984504  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:39:48.013373  528764 cri.go:89] found id: ""
	I1217 20:39:48.013389  528764 logs.go:282] 0 containers: []
	W1217 20:39:48.013396  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:39:48.013402  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:39:48.013461  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:39:48.040700  528764 cri.go:89] found id: ""
	I1217 20:39:48.040713  528764 logs.go:282] 0 containers: []
	W1217 20:39:48.040720  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:39:48.040728  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:39:48.040740  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:39:48.112503  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:39:48.112522  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:39:48.148498  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:39:48.148514  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:39:48.215575  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:39:48.215644  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:39:48.230769  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:39:48.230785  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:39:48.305622  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:39:48.297446   16820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:48.298244   16820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:48.299754   16820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:48.300303   16820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:48.301821   16820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:39:48.297446   16820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:48.298244   16820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:48.299754   16820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:48.300303   16820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:48.301821   16820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
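
The roughly three-second spacing between cycles comes from an apiserver poll: each cycle starts by looking for a kube-apiserver process and only re-gathers logs when none is found. An illustrative wait loop (the sleep interval and the shell loop are assumptions for readability; the real logic, including its timeout, lives in minikube's Go code):

    # poll for a kube-apiserver process (illustrative; real loop is in Go)
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*'; do
      sleep 3    # matches the observed spacing of the pgrep Run: lines
    done
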
	I1217 20:39:50.807281  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:39:50.819012  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:39:50.819075  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:39:50.845131  528764 cri.go:89] found id: ""
	I1217 20:39:50.845145  528764 logs.go:282] 0 containers: []
	W1217 20:39:50.845153  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:39:50.845158  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:39:50.845215  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:39:50.878758  528764 cri.go:89] found id: ""
	I1217 20:39:50.878771  528764 logs.go:282] 0 containers: []
	W1217 20:39:50.878778  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:39:50.878783  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:39:50.878851  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:39:50.905139  528764 cri.go:89] found id: ""
	I1217 20:39:50.905154  528764 logs.go:282] 0 containers: []
	W1217 20:39:50.905161  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:39:50.905167  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:39:50.905234  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:39:50.930885  528764 cri.go:89] found id: ""
	I1217 20:39:50.930898  528764 logs.go:282] 0 containers: []
	W1217 20:39:50.930923  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:39:50.930928  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:39:50.931004  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:39:50.961249  528764 cri.go:89] found id: ""
	I1217 20:39:50.961264  528764 logs.go:282] 0 containers: []
	W1217 20:39:50.961271  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:39:50.961281  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:39:50.961339  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:39:50.990268  528764 cri.go:89] found id: ""
	I1217 20:39:50.990283  528764 logs.go:282] 0 containers: []
	W1217 20:39:50.990290  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:39:50.990305  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:39:50.990368  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:39:51.022220  528764 cri.go:89] found id: ""
	I1217 20:39:51.022235  528764 logs.go:282] 0 containers: []
	W1217 20:39:51.022253  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:39:51.022260  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:39:51.022272  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:39:51.037279  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:39:51.037301  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:39:51.104091  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:39:51.095357   16908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:51.096330   16908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:51.098113   16908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:51.098463   16908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:51.100108   16908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:39:51.095357   16908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:51.096330   16908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:51.098113   16908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:51.098463   16908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:51.100108   16908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:39:51.104101  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:39:51.104112  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:39:51.170651  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:39:51.170674  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:39:51.200399  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:39:51.200421  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:39:53.770767  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:39:53.780793  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:39:53.780851  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:39:53.809348  528764 cri.go:89] found id: ""
	I1217 20:39:53.809362  528764 logs.go:282] 0 containers: []
	W1217 20:39:53.809370  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:39:53.809375  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:39:53.809441  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:39:53.834689  528764 cri.go:89] found id: ""
	I1217 20:39:53.834703  528764 logs.go:282] 0 containers: []
	W1217 20:39:53.834710  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:39:53.834716  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:39:53.834772  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:39:53.861465  528764 cri.go:89] found id: ""
	I1217 20:39:53.861483  528764 logs.go:282] 0 containers: []
	W1217 20:39:53.861491  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:39:53.861498  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:39:53.861562  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:39:53.891732  528764 cri.go:89] found id: ""
	I1217 20:39:53.891747  528764 logs.go:282] 0 containers: []
	W1217 20:39:53.891754  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:39:53.891759  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:39:53.891817  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:39:53.917938  528764 cri.go:89] found id: ""
	I1217 20:39:53.917952  528764 logs.go:282] 0 containers: []
	W1217 20:39:53.917959  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:39:53.917964  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:39:53.918024  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:39:53.943397  528764 cri.go:89] found id: ""
	I1217 20:39:53.943412  528764 logs.go:282] 0 containers: []
	W1217 20:39:53.943420  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:39:53.943431  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:39:53.943500  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:39:53.970499  528764 cri.go:89] found id: ""
	I1217 20:39:53.970514  528764 logs.go:282] 0 containers: []
	W1217 20:39:53.970521  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:39:53.970529  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:39:53.970540  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:39:54.037615  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:39:54.028803   17010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:54.030066   17010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:54.030771   17010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:54.032113   17010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:54.032669   17010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:39:54.028803   17010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:54.030066   17010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:54.030771   17010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:54.032113   17010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:54.032669   17010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:39:54.037625  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:39:54.037637  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:39:54.105683  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:39:54.105702  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:39:54.135408  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:39:54.135424  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:39:54.201915  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:39:54.201934  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:39:56.717571  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:39:56.727576  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:39:56.727663  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:39:56.752566  528764 cri.go:89] found id: ""
	I1217 20:39:56.752580  528764 logs.go:282] 0 containers: []
	W1217 20:39:56.752587  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:39:56.752593  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:39:56.752649  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:39:56.778100  528764 cri.go:89] found id: ""
	I1217 20:39:56.778114  528764 logs.go:282] 0 containers: []
	W1217 20:39:56.778123  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:39:56.778128  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:39:56.778188  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:39:56.810564  528764 cri.go:89] found id: ""
	I1217 20:39:56.810578  528764 logs.go:282] 0 containers: []
	W1217 20:39:56.810585  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:39:56.810590  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:39:56.810651  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:39:56.836110  528764 cri.go:89] found id: ""
	I1217 20:39:56.836123  528764 logs.go:282] 0 containers: []
	W1217 20:39:56.836130  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:39:56.836136  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:39:56.836192  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:39:56.860819  528764 cri.go:89] found id: ""
	I1217 20:39:56.860833  528764 logs.go:282] 0 containers: []
	W1217 20:39:56.860840  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:39:56.860845  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:39:56.860910  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:39:56.885378  528764 cri.go:89] found id: ""
	I1217 20:39:56.885392  528764 logs.go:282] 0 containers: []
	W1217 20:39:56.885400  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:39:56.885405  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:39:56.885464  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:39:56.910636  528764 cri.go:89] found id: ""
	I1217 20:39:56.910649  528764 logs.go:282] 0 containers: []
	W1217 20:39:56.910657  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:39:56.910664  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:39:56.910685  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:39:56.975973  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:39:56.975994  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:39:56.990897  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:39:56.990913  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:39:57.059420  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:39:57.050613   17117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:57.051527   17117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:57.053283   17117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:57.053833   17117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:57.055383   17117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:39:57.050613   17117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:57.051527   17117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:57.053283   17117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:57.053833   17117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:57.055383   17117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:39:57.059434  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:39:57.059444  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:39:57.127559  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:39:57.127588  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:39:59.660834  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:39:59.671347  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:39:59.671409  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:39:59.697317  528764 cri.go:89] found id: ""
	I1217 20:39:59.697331  528764 logs.go:282] 0 containers: []
	W1217 20:39:59.697338  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:39:59.697344  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:39:59.697400  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:39:59.721571  528764 cri.go:89] found id: ""
	I1217 20:39:59.721586  528764 logs.go:282] 0 containers: []
	W1217 20:39:59.721593  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:39:59.721601  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:39:59.721663  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:39:59.746819  528764 cri.go:89] found id: ""
	I1217 20:39:59.746835  528764 logs.go:282] 0 containers: []
	W1217 20:39:59.746843  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:39:59.746849  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:39:59.746909  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:39:59.773034  528764 cri.go:89] found id: ""
	I1217 20:39:59.773049  528764 logs.go:282] 0 containers: []
	W1217 20:39:59.773057  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:39:59.773062  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:39:59.773123  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:39:59.802418  528764 cri.go:89] found id: ""
	I1217 20:39:59.802441  528764 logs.go:282] 0 containers: []
	W1217 20:39:59.802449  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:39:59.802454  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:39:59.802524  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:39:59.831711  528764 cri.go:89] found id: ""
	I1217 20:39:59.831725  528764 logs.go:282] 0 containers: []
	W1217 20:39:59.831733  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:39:59.831739  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:39:59.831804  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:39:59.856953  528764 cri.go:89] found id: ""
	I1217 20:39:59.856967  528764 logs.go:282] 0 containers: []
	W1217 20:39:59.856975  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:39:59.856982  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:39:59.856995  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:39:59.884897  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:39:59.884914  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:39:59.949655  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:39:59.949677  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:39:59.964501  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:39:59.964517  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:40:00.094107  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:40:00.057057   17231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:00.058079   17231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:00.081049   17231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:00.085379   17231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:00.086028   17231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:40:00.057057   17231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:00.058079   17231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:00.081049   17231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:00.085379   17231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:00.086028   17231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:40:00.094120  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:40:00.094132  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:40:02.787739  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:40:02.797830  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:40:02.797894  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:40:02.834082  528764 cri.go:89] found id: ""
	I1217 20:40:02.834096  528764 logs.go:282] 0 containers: []
	W1217 20:40:02.834104  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:40:02.834109  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:40:02.834168  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:40:02.866743  528764 cri.go:89] found id: ""
	I1217 20:40:02.866756  528764 logs.go:282] 0 containers: []
	W1217 20:40:02.866763  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:40:02.866768  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:40:02.866837  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:40:02.895045  528764 cri.go:89] found id: ""
	I1217 20:40:02.895058  528764 logs.go:282] 0 containers: []
	W1217 20:40:02.895066  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:40:02.895071  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:40:02.895126  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:40:02.921557  528764 cri.go:89] found id: ""
	I1217 20:40:02.921570  528764 logs.go:282] 0 containers: []
	W1217 20:40:02.921580  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:40:02.921585  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:40:02.921641  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:40:02.952647  528764 cri.go:89] found id: ""
	I1217 20:40:02.952661  528764 logs.go:282] 0 containers: []
	W1217 20:40:02.952669  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:40:02.952675  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:40:02.952733  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:40:02.983298  528764 cri.go:89] found id: ""
	I1217 20:40:02.983312  528764 logs.go:282] 0 containers: []
	W1217 20:40:02.983319  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:40:02.983325  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:40:02.983389  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:40:03.010550  528764 cri.go:89] found id: ""
	I1217 20:40:03.010565  528764 logs.go:282] 0 containers: []
	W1217 20:40:03.010573  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:40:03.010581  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:40:03.010592  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:40:03.079310  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:40:03.079329  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:40:03.094479  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:40:03.094497  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:40:03.161221  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:40:03.151895   17325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:03.152622   17325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:03.154348   17325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:03.155044   17325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:03.156824   17325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:40:03.151895   17325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:03.152622   17325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:03.154348   17325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:03.155044   17325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:03.156824   17325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:40:03.161231  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:40:03.161242  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:40:03.227816  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:40:03.227835  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:40:05.757487  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:40:05.767711  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:40:05.767773  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:40:05.793946  528764 cri.go:89] found id: ""
	I1217 20:40:05.793960  528764 logs.go:282] 0 containers: []
	W1217 20:40:05.793972  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:40:05.793978  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:40:05.794036  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:40:05.822285  528764 cri.go:89] found id: ""
	I1217 20:40:05.822299  528764 logs.go:282] 0 containers: []
	W1217 20:40:05.822306  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:40:05.822314  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:40:05.822371  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:40:05.850250  528764 cri.go:89] found id: ""
	I1217 20:40:05.850264  528764 logs.go:282] 0 containers: []
	W1217 20:40:05.850271  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:40:05.850277  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:40:05.850335  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:40:05.895396  528764 cri.go:89] found id: ""
	I1217 20:40:05.895410  528764 logs.go:282] 0 containers: []
	W1217 20:40:05.895417  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:40:05.895422  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:40:05.895477  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:40:05.922557  528764 cri.go:89] found id: ""
	I1217 20:40:05.922571  528764 logs.go:282] 0 containers: []
	W1217 20:40:05.922580  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:40:05.922586  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:40:05.922644  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:40:05.948573  528764 cri.go:89] found id: ""
	I1217 20:40:05.948586  528764 logs.go:282] 0 containers: []
	W1217 20:40:05.948594  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:40:05.948599  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:40:05.948655  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:40:05.975477  528764 cri.go:89] found id: ""
	I1217 20:40:05.975492  528764 logs.go:282] 0 containers: []
	W1217 20:40:05.975499  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:40:05.975507  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:40:05.975518  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:40:06.041819  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:40:06.041840  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:40:06.056861  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:40:06.056877  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:40:06.121776  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:40:06.113002   17431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:06.113953   17431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:06.115560   17431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:06.115989   17431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:06.117538   17431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:40:06.113002   17431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:06.113953   17431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:06.115560   17431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:06.115989   17431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:06.117538   17431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:40:06.121787  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:40:06.121799  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:40:06.189149  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:40:06.189168  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:40:08.726723  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:40:08.736543  528764 kubeadm.go:602] duration metric: took 4m2.922502769s to restartPrimaryControlPlane
	W1217 20:40:08.736595  528764 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1217 20:40:08.736673  528764 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
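
At this point the restart path has given up (4m2.9s, per the duration metric above) and minikube falls back to wiping the control plane before re-initialising it. The reset step spelled out, with paths verbatim from the log:

    # tear down the failed control plane ahead of a fresh kubeadm init
    sudo env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" \
      kubeadm reset --cri-socket /var/run/crio/crio.sock --force
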
	I1217 20:40:09.144455  528764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 20:40:09.157270  528764 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 20:40:09.165045  528764 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 20:40:09.165097  528764 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 20:40:09.172944  528764 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 20:40:09.172955  528764 kubeadm.go:158] found existing configuration files:
	
	I1217 20:40:09.173008  528764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1217 20:40:09.180768  528764 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 20:40:09.180823  528764 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 20:40:09.188593  528764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1217 20:40:09.196627  528764 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 20:40:09.196696  528764 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 20:40:09.204027  528764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1217 20:40:09.211590  528764 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 20:40:09.211645  528764 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 20:40:09.219300  528764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1217 20:40:09.227194  528764 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 20:40:09.227262  528764 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 20:40:09.234747  528764 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 20:40:09.272070  528764 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1217 20:40:09.272212  528764 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 20:40:09.341132  528764 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 20:40:09.341223  528764 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1217 20:40:09.341264  528764 kubeadm.go:319] OS: Linux
	I1217 20:40:09.341317  528764 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 20:40:09.341383  528764 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 20:40:09.341441  528764 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 20:40:09.341494  528764 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 20:40:09.341544  528764 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 20:40:09.341595  528764 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 20:40:09.341642  528764 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 20:40:09.341697  528764 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 20:40:09.341746  528764 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 20:40:09.410099  528764 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 20:40:09.410202  528764 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 20:40:09.410291  528764 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 20:40:09.420776  528764 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 20:40:09.424281  528764 out.go:252]   - Generating certificates and keys ...
	I1217 20:40:09.424384  528764 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 20:40:09.424470  528764 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 20:40:09.424574  528764 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1217 20:40:09.424647  528764 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1217 20:40:09.424730  528764 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1217 20:40:09.424800  528764 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1217 20:40:09.424875  528764 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1217 20:40:09.424947  528764 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1217 20:40:09.425042  528764 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1217 20:40:09.425124  528764 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1217 20:40:09.425164  528764 kubeadm.go:319] [certs] Using the existing "sa" key
	I1217 20:40:09.425224  528764 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 20:40:09.510914  528764 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 20:40:09.769116  528764 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 20:40:10.300117  528764 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 20:40:10.525653  528764 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 20:40:10.613609  528764 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 20:40:10.614221  528764 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 20:40:10.616799  528764 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 20:40:10.619993  528764 out.go:252]   - Booting up control plane ...
	I1217 20:40:10.620096  528764 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 20:40:10.620217  528764 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 20:40:10.620290  528764 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 20:40:10.635322  528764 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 20:40:10.635439  528764 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 20:40:10.644820  528764 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 20:40:10.645930  528764 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 20:40:10.645984  528764 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 20:40:10.779996  528764 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 20:40:10.780110  528764 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 20:44:10.781176  528764 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001248714s
	I1217 20:44:10.781203  528764 kubeadm.go:319] 
	I1217 20:44:10.781260  528764 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 20:44:10.781303  528764 kubeadm.go:319] 	- The kubelet is not running
	I1217 20:44:10.781406  528764 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 20:44:10.781411  528764 kubeadm.go:319] 
	I1217 20:44:10.781555  528764 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 20:44:10.781602  528764 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 20:44:10.781633  528764 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 20:44:10.781637  528764 kubeadm.go:319] 
	I1217 20:44:10.786300  528764 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1217 20:44:10.786712  528764 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 20:44:10.786818  528764 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 20:44:10.787052  528764 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1217 20:44:10.787056  528764 kubeadm.go:319] 
	I1217 20:44:10.787124  528764 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1217 20:44:10.787237  528764 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001248714s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
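	
	For anyone retracing this failure by hand, the checks kubeadm recommends above can be run inside the node (for example via 'minikube ssh' into this profile); the healthz probe is the same endpoint the [kubelet-check] phase polls for up to 4m0s. A minimal sketch, assuming a shell on the node:
	
	  sudo systemctl status kubelet
	  sudo journalctl -xeu kubelet -n 50
	  # a healthy kubelet answers this with "ok"; in this run it never does
	  curl -sS http://127.0.0.1:10248/healthz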
	
	I1217 20:44:10.787339  528764 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1217 20:44:11.201167  528764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 20:44:11.214381  528764 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 20:44:11.214439  528764 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 20:44:11.222598  528764 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 20:44:11.222610  528764 kubeadm.go:158] found existing configuration files:
	
	I1217 20:44:11.222661  528764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1217 20:44:11.230419  528764 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 20:44:11.230478  528764 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 20:44:11.238159  528764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1217 20:44:11.246406  528764 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 20:44:11.246462  528764 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 20:44:11.254307  528764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1217 20:44:11.262104  528764 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 20:44:11.262159  528764 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 20:44:11.270202  528764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1217 20:44:11.278439  528764 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 20:44:11.278497  528764 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 20:44:11.286143  528764 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 20:44:11.330597  528764 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1217 20:44:11.330648  528764 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 20:44:11.407432  528764 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 20:44:11.407494  528764 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1217 20:44:11.407526  528764 kubeadm.go:319] OS: Linux
	I1217 20:44:11.407568  528764 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 20:44:11.407631  528764 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 20:44:11.407675  528764 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 20:44:11.407720  528764 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 20:44:11.407764  528764 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 20:44:11.407809  528764 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 20:44:11.407851  528764 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 20:44:11.407896  528764 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 20:44:11.407938  528764 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 20:44:11.479750  528764 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 20:44:11.479854  528764 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 20:44:11.479945  528764 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 20:44:11.492072  528764 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 20:44:11.494989  528764 out.go:252]   - Generating certificates and keys ...
	I1217 20:44:11.495078  528764 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 20:44:11.495152  528764 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 20:44:11.495231  528764 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1217 20:44:11.495312  528764 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1217 20:44:11.495394  528764 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1217 20:44:11.495452  528764 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1217 20:44:11.495526  528764 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1217 20:44:11.495616  528764 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1217 20:44:11.495700  528764 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1217 20:44:11.495778  528764 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1217 20:44:11.495818  528764 kubeadm.go:319] [certs] Using the existing "sa" key
	I1217 20:44:11.495877  528764 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 20:44:11.718879  528764 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 20:44:11.913718  528764 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 20:44:12.104953  528764 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 20:44:12.214740  528764 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 20:44:13.078100  528764 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 20:44:13.078681  528764 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 20:44:13.081470  528764 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 20:44:13.086841  528764 out.go:252]   - Booting up control plane ...
	I1217 20:44:13.086964  528764 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 20:44:13.087047  528764 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 20:44:13.087115  528764 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 20:44:13.101223  528764 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 20:44:13.101325  528764 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 20:44:13.108618  528764 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 20:44:13.108874  528764 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 20:44:13.109039  528764 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 20:44:13.243147  528764 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 20:44:13.243267  528764 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 20:48:13.243345  528764 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000238438s
	I1217 20:48:13.243376  528764 kubeadm.go:319] 
	I1217 20:48:13.243430  528764 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 20:48:13.243460  528764 kubeadm.go:319] 	- The kubelet is not running
	I1217 20:48:13.243558  528764 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 20:48:13.243562  528764 kubeadm.go:319] 
	I1217 20:48:13.243678  528764 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 20:48:13.243708  528764 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 20:48:13.243736  528764 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 20:48:13.243739  528764 kubeadm.go:319] 
	I1217 20:48:13.247539  528764 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1217 20:48:13.247985  528764 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 20:48:13.248095  528764 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 20:48:13.248338  528764 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1217 20:48:13.248343  528764 kubeadm.go:319] 
	I1217 20:48:13.248416  528764 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1217 20:48:13.248469  528764 kubeadm.go:403] duration metric: took 12m7.468824114s to StartCluster
	I1217 20:48:13.248499  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:48:13.248560  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:48:13.273652  528764 cri.go:89] found id: ""
	I1217 20:48:13.273665  528764 logs.go:282] 0 containers: []
	W1217 20:48:13.273672  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:48:13.273677  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:48:13.273743  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:48:13.299758  528764 cri.go:89] found id: ""
	I1217 20:48:13.299773  528764 logs.go:282] 0 containers: []
	W1217 20:48:13.299780  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:48:13.299787  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:48:13.299849  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:48:13.331514  528764 cri.go:89] found id: ""
	I1217 20:48:13.331527  528764 logs.go:282] 0 containers: []
	W1217 20:48:13.331534  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:48:13.331538  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:48:13.331632  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:48:13.361494  528764 cri.go:89] found id: ""
	I1217 20:48:13.361508  528764 logs.go:282] 0 containers: []
	W1217 20:48:13.361515  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:48:13.361520  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:48:13.361583  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:48:13.392361  528764 cri.go:89] found id: ""
	I1217 20:48:13.392374  528764 logs.go:282] 0 containers: []
	W1217 20:48:13.392382  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:48:13.392387  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:48:13.392445  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:48:13.420567  528764 cri.go:89] found id: ""
	I1217 20:48:13.420581  528764 logs.go:282] 0 containers: []
	W1217 20:48:13.420589  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:48:13.420594  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:48:13.420652  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:48:13.446072  528764 cri.go:89] found id: ""
	I1217 20:48:13.446086  528764 logs.go:282] 0 containers: []
	W1217 20:48:13.446093  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:48:13.446102  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:48:13.446112  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:48:13.512293  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:48:13.512314  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:48:13.527934  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:48:13.527951  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:48:13.596728  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:48:13.587277   21200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:48:13.587931   21200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:48:13.589795   21200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:48:13.590404   21200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:48:13.592261   21200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:48:13.587277   21200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:48:13.587931   21200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:48:13.589795   21200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:48:13.590404   21200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:48:13.592261   21200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:48:13.596751  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:48:13.596762  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:48:13.666834  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:48:13.666852  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 20:48:13.697763  528764 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000238438s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1217 20:48:13.697796  528764 out.go:285] * 
	W1217 20:48:13.697859  528764 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000238438s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1217 20:48:13.697876  528764 out.go:285] * 
	W1217 20:48:13.700016  528764 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
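	
	To produce the bundle the box above asks for, the invocation (profile name taken from this run; a sketch, not a command captured in this log) would be roughly:
	
	  minikube logs --file=logs.txt -p functional-655452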
	I1217 20:48:13.704929  528764 out.go:203] 
	W1217 20:48:13.708733  528764 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000238438s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1217 20:48:13.708785  528764 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1217 20:48:13.708804  528764 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
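	
	Spelled out against this profile, the suggested workaround is roughly the following; the --extra-config value is quoted verbatim from the suggestion above, and whether it clears the cgroup v1 validation error shown in the kubelet log below is not verified here:
	
	  minikube start -p functional-655452 --extra-config=kubelet.cgroup-driver=systemd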
	I1217 20:48:13.713576  528764 out.go:203] 
	
	
	==> CRI-O <==
	Dec 17 20:36:04 functional-655452 crio[10065]: time="2025-12-17T20:36:04.496553819Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 17 20:36:04 functional-655452 crio[10065]: time="2025-12-17T20:36:04.496588913Z" level=info msg="Starting seccomp notifier watcher"
	Dec 17 20:36:04 functional-655452 crio[10065]: time="2025-12-17T20:36:04.496641484Z" level=info msg="Create NRI interface"
	Dec 17 20:36:04 functional-655452 crio[10065]: time="2025-12-17T20:36:04.496756307Z" level=info msg="built-in NRI default validator is disabled"
	Dec 17 20:36:04 functional-655452 crio[10065]: time="2025-12-17T20:36:04.496765161Z" level=info msg="runtime interface created"
	Dec 17 20:36:04 functional-655452 crio[10065]: time="2025-12-17T20:36:04.496787586Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 17 20:36:04 functional-655452 crio[10065]: time="2025-12-17T20:36:04.496795537Z" level=info msg="runtime interface starting up..."
	Dec 17 20:36:04 functional-655452 crio[10065]: time="2025-12-17T20:36:04.496804792Z" level=info msg="starting plugins..."
	Dec 17 20:36:04 functional-655452 crio[10065]: time="2025-12-17T20:36:04.496818503Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 17 20:36:04 functional-655452 crio[10065]: time="2025-12-17T20:36:04.496896764Z" level=info msg="No systemd watchdog enabled"
	Dec 17 20:36:04 functional-655452 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 17 20:40:09 functional-655452 crio[10065]: time="2025-12-17T20:40:09.415834383Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-rc.1" id=58f6f0f1-488b-4240-a679-3e157f00d7e7 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:40:09 functional-655452 crio[10065]: time="2025-12-17T20:40:09.416590837Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" id=05b425cc-49a9-416d-8e00-62945047df36 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:40:09 functional-655452 crio[10065]: time="2025-12-17T20:40:09.417323538Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-rc.1" id=a9a38e6d-b290-413f-a93f-cf194783972f name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:40:09 functional-655452 crio[10065]: time="2025-12-17T20:40:09.417962945Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-rc.1" id=bdf79a37-e5ac-441d-baa9-990efb2af86f name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:40:09 functional-655452 crio[10065]: time="2025-12-17T20:40:09.418404377Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=f29ade00-2b87-48af-a8d1-af1f70d12fc1 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:40:09 functional-655452 crio[10065]: time="2025-12-17T20:40:09.418943992Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=aa01ccac-5dc1-42c2-9b96-b5307aedf908 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:40:09 functional-655452 crio[10065]: time="2025-12-17T20:40:09.419435131Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.6-0" id=3071c5cb-d2e8-40e4-bf26-10cfdb83c6ca name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:44:11 functional-655452 crio[10065]: time="2025-12-17T20:44:11.483168755Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-rc.1" id=116885b2-e96e-48a5-8c7d-749c0bd3c872 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:44:11 functional-655452 crio[10065]: time="2025-12-17T20:44:11.484179432Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" id=a7b99d88-fbbf-4485-ad77-1f09bb11e283 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:44:11 functional-655452 crio[10065]: time="2025-12-17T20:44:11.484714555Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-rc.1" id=1a3a48a9-47e1-4681-9a10-70d7c5e85de2 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:44:11 functional-655452 crio[10065]: time="2025-12-17T20:44:11.48529777Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-rc.1" id=48ecbe50-05dc-4736-8a4c-23a7b8f0b752 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:44:11 functional-655452 crio[10065]: time="2025-12-17T20:44:11.485817657Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=13bf3d26-ab2e-4773-bb7e-3fc288ba3714 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:44:11 functional-655452 crio[10065]: time="2025-12-17T20:44:11.486350122Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=3ebf0c9f-0c46-4d67-8924-03dd39ad4399 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:44:11 functional-655452 crio[10065]: time="2025-12-17T20:44:11.486847969Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.6-0" id=e3deb8c8-e04b-4949-9c80-5a8e5a9b5bee name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
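	
	The header-only table above means the runtime holds no containers at all, matching the earlier probes in this log where every 'crictl ps -a --quiet --name=...' query found 0 containers; the same check, run directly on the node:
	
	  sudo crictl ps -a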
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:48:14.919685   21322 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:48:14.920409   21322 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:48:14.922180   21322 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:48:14.922657   21322 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:48:14.923892   21322 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec17 19:02] hrtimer: interrupt took 71042327 ns
	[Dec17 20:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec17 20:12] overlayfs: idmapped layers are currently not supported
	[  +0.080288] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec17 20:17] overlayfs: idmapped layers are currently not supported
	[Dec17 20:18] overlayfs: idmapped layers are currently not supported
	[Dec17 20:35] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 20:48:14 up  3:30,  0 user,  load average: 0.21, 0.22, 0.54
	Linux functional-655452 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 20:48:12 functional-655452 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 20:48:13 functional-655452 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 961.
	Dec 17 20:48:13 functional-655452 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 20:48:13 functional-655452 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 20:48:13 functional-655452 kubelet[21153]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 20:48:13 functional-655452 kubelet[21153]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 20:48:13 functional-655452 kubelet[21153]: E1217 20:48:13.387067   21153 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 20:48:13 functional-655452 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 20:48:13 functional-655452 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 20:48:14 functional-655452 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 962.
	Dec 17 20:48:14 functional-655452 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 20:48:14 functional-655452 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 20:48:14 functional-655452 kubelet[21233]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 20:48:14 functional-655452 kubelet[21233]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 20:48:14 functional-655452 kubelet[21233]: E1217 20:48:14.144455   21233 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 20:48:14 functional-655452 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 20:48:14 functional-655452 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 20:48:14 functional-655452 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 963.
	Dec 17 20:48:14 functional-655452 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 20:48:14 functional-655452 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 20:48:14 functional-655452 kubelet[21312]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 20:48:14 functional-655452 kubelet[21312]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 20:48:14 functional-655452 kubelet[21312]: E1217 20:48:14.889503   21312 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 20:48:14 functional-655452 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 20:48:14 functional-655452 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-655452 -n functional-655452
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-655452 -n functional-655452: exit status 2 (383.331204ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-655452" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig (734.62s)
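
Note on the failure mode above: the kubelet is crash-looping on config validation ("kubelet is configured to not run on a host using cgroup v1"), so the apiserver on port 8441 never comes back and "describe nodes" gets connection refused. A quick way to confirm which cgroup mode the host and the Docker daemon are actually using (a diagnostic sketch, not part of the test run; both commands are standard):

	# "cgroup2fs" means cgroup v2, "tmpfs" means cgroup v1
	stat -fc %T /sys/fs/cgroup
	# what the Docker daemon reports (1 or 2)
	docker info --format '{{.CgroupVersion}}'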

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth (2.27s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-655452 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: (dbg) Non-zero exit: kubectl --context functional-655452 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (61.559382ms)

                                                
                                                
-- stdout --
	{
	    "apiVersion": "v1",
	    "items": [],
	    "kind": "List",
	    "metadata": {
	        "resourceVersion": ""
	    }
	}

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:827: failed to get components. args "kubectl --context functional-655452 get po -l tier=control-plane -n kube-system -o=json": exit status 1
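
The empty "items" list plus "connection refused" on 192.168.49.2:8441 means kubectl never reached the apiserver at all, consistent with the kubelet crash loop in the previous section rather than a label-selector problem. A minimal reachability probe against the same endpoint (a sketch; context name and address are taken from this run — any HTTP response, even 401/403, would show the port is reachable):

	kubectl --context functional-655452 get --raw /readyz
	# or hit the endpoint directly, skipping TLS verification:
	curl -sk https://192.168.49.2:8441/readyz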
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-655452
helpers_test.go:244: (dbg) docker inspect functional-655452:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ce7a6a54d14f425b4187bd0ca1b803527a4413b0e2019a0bca6ff0c4cc720b0f",
	        "Created": "2025-12-17T20:21:23.127111799Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 517339,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T20:21:23.205943598Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/ce7a6a54d14f425b4187bd0ca1b803527a4413b0e2019a0bca6ff0c4cc720b0f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ce7a6a54d14f425b4187bd0ca1b803527a4413b0e2019a0bca6ff0c4cc720b0f/hostname",
	        "HostsPath": "/var/lib/docker/containers/ce7a6a54d14f425b4187bd0ca1b803527a4413b0e2019a0bca6ff0c4cc720b0f/hosts",
	        "LogPath": "/var/lib/docker/containers/ce7a6a54d14f425b4187bd0ca1b803527a4413b0e2019a0bca6ff0c4cc720b0f/ce7a6a54d14f425b4187bd0ca1b803527a4413b0e2019a0bca6ff0c4cc720b0f-json.log",
	        "Name": "/functional-655452",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-655452:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-655452",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ce7a6a54d14f425b4187bd0ca1b803527a4413b0e2019a0bca6ff0c4cc720b0f",
	                "LowerDir": "/var/lib/docker/overlay2/b078ea641dafe3bb75ca3d83c43fd851f0f209e5f3a5a8b9ceab888e394031a8-init/diff:/var/lib/docker/overlay2/c4759b344ecb109c83d66e4ef56c76903f1d1e597efb86ab2d2c7911b1130a8f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b078ea641dafe3bb75ca3d83c43fd851f0f209e5f3a5a8b9ceab888e394031a8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b078ea641dafe3bb75ca3d83c43fd851f0f209e5f3a5a8b9ceab888e394031a8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b078ea641dafe3bb75ca3d83c43fd851f0f209e5f3a5a8b9ceab888e394031a8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-655452",
	                "Source": "/var/lib/docker/volumes/functional-655452/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-655452",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-655452",
	                "name.minikube.sigs.k8s.io": "functional-655452",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "77ae7dbcf69b3201be4ce65a88d235dbad078142318e01a5939425f9766fb924",
	            "SandboxKey": "/var/run/docker/netns/77ae7dbcf69b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33178"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33179"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33182"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33180"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33181"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-655452": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "36:c6:98:b5:58:5a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b68cd6e69ccdb81b0870dd976e2d5401ca696480842d2c469293121d59435cfb",
	                    "EndpointID": "82926bb4b08b5f66131c1026a8bc72a582e9cb8c6d97a278a494965923f47cb0",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-655452",
	                        "ce7a6a54d14f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
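
Worth noting from the inspect output above: the container is Running and 8441/tcp (the apiserver port) is published on the host as 127.0.0.1:33181, so the failure sits inside the node rather than in Docker networking. The forwarded port can be probed from the host as a sanity check (a sketch; the host port is specific to this run):

	curl -sk https://127.0.0.1:33181/version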
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-655452 -n functional-655452
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-655452 -n functional-655452: exit status 2 (325.510945ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                      ARGS                                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-643319 image ls --format yaml --alsologtostderr                                                                                      │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │ 17 Dec 25 20:21 UTC │
	│ ssh     │ functional-643319 ssh pgrep buildkitd                                                                                                           │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │                     │
	│ image   │ functional-643319 image build -t localhost/my-image:functional-643319 testdata/build --alsologtostderr                                          │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │ 17 Dec 25 20:21 UTC │
	│ image   │ functional-643319 image ls --format json --alsologtostderr                                                                                      │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │ 17 Dec 25 20:21 UTC │
	│ image   │ functional-643319 image ls --format table --alsologtostderr                                                                                     │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │ 17 Dec 25 20:21 UTC │
	│ image   │ functional-643319 image ls                                                                                                                      │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │ 17 Dec 25 20:21 UTC │
	│ delete  │ -p functional-643319                                                                                                                            │ functional-643319 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │ 17 Dec 25 20:21 UTC │
	│ start   │ -p functional-655452 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:21 UTC │                     │
	│ start   │ -p functional-655452 --alsologtostderr -v=8                                                                                                     │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:29 UTC │                     │
	│ cache   │ functional-655452 cache add registry.k8s.io/pause:3.1                                                                                           │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:35 UTC │ 17 Dec 25 20:35 UTC │
	│ cache   │ functional-655452 cache add registry.k8s.io/pause:3.3                                                                                           │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:35 UTC │ 17 Dec 25 20:35 UTC │
	│ cache   │ functional-655452 cache add registry.k8s.io/pause:latest                                                                                        │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:35 UTC │ 17 Dec 25 20:35 UTC │
	│ cache   │ functional-655452 cache add minikube-local-cache-test:functional-655452                                                                         │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:35 UTC │ 17 Dec 25 20:35 UTC │
	│ cache   │ functional-655452 cache delete minikube-local-cache-test:functional-655452                                                                      │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:35 UTC │ 17 Dec 25 20:35 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                │ minikube          │ jenkins │ v1.37.0 │ 17 Dec 25 20:35 UTC │ 17 Dec 25 20:35 UTC │
	│ cache   │ list                                                                                                                                            │ minikube          │ jenkins │ v1.37.0 │ 17 Dec 25 20:35 UTC │ 17 Dec 25 20:35 UTC │
	│ ssh     │ functional-655452 ssh sudo crictl images                                                                                                        │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:35 UTC │ 17 Dec 25 20:35 UTC │
	│ ssh     │ functional-655452 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                              │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:35 UTC │ 17 Dec 25 20:35 UTC │
	│ ssh     │ functional-655452 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                         │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:35 UTC │                     │
	│ cache   │ functional-655452 cache reload                                                                                                                  │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:35 UTC │ 17 Dec 25 20:35 UTC │
	│ ssh     │ functional-655452 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                         │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:35 UTC │ 17 Dec 25 20:35 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                │ minikube          │ jenkins │ v1.37.0 │ 17 Dec 25 20:35 UTC │ 17 Dec 25 20:35 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                             │ minikube          │ jenkins │ v1.37.0 │ 17 Dec 25 20:35 UTC │ 17 Dec 25 20:35 UTC │
	│ kubectl │ functional-655452 kubectl -- --context functional-655452 get pods                                                                               │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:35 UTC │                     │
	│ start   │ -p functional-655452 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                        │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:36 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 20:36:01
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 20:36:01.304180  528764 out.go:360] Setting OutFile to fd 1 ...
	I1217 20:36:01.304299  528764 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:36:01.304303  528764 out.go:374] Setting ErrFile to fd 2...
	I1217 20:36:01.304307  528764 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:36:01.304548  528764 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-485134/.minikube/bin
	I1217 20:36:01.304941  528764 out.go:368] Setting JSON to false
	I1217 20:36:01.305793  528764 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":11911,"bootTime":1765991851,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1217 20:36:01.305860  528764 start.go:143] virtualization:  
	I1217 20:36:01.309940  528764 out.go:179] * [functional-655452] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 20:36:01.313178  528764 out.go:179]   - MINIKUBE_LOCATION=21808
	I1217 20:36:01.313261  528764 notify.go:221] Checking for updates...
	I1217 20:36:01.319276  528764 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 20:36:01.322533  528764 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-485134/kubeconfig
	I1217 20:36:01.325481  528764 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-485134/.minikube
	I1217 20:36:01.328332  528764 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 20:36:01.331257  528764 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 20:36:01.334638  528764 config.go:182] Loaded profile config "functional-655452": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 20:36:01.334735  528764 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 20:36:01.377324  528764 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 20:36:01.377436  528764 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 20:36:01.442821  528764 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-17 20:36:01.432767342 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 20:36:01.442911  528764 docker.go:319] overlay module found
	I1217 20:36:01.446093  528764 out.go:179] * Using the docker driver based on existing profile
	I1217 20:36:01.448835  528764 start.go:309] selected driver: docker
	I1217 20:36:01.448847  528764 start.go:927] validating driver "docker" against &{Name:functional-655452 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-655452 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:36:01.448948  528764 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 20:36:01.449055  528764 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 20:36:01.502893  528764 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-17 20:36:01.493096577 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 20:36:01.503296  528764 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 20:36:01.503325  528764 cni.go:84] Creating CNI manager for ""
	I1217 20:36:01.503373  528764 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 20:36:01.503423  528764 start.go:353] cluster config:
	{Name:functional-655452 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-655452 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:36:01.506646  528764 out.go:179] * Starting "functional-655452" primary control-plane node in "functional-655452" cluster
	I1217 20:36:01.509580  528764 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 20:36:01.512594  528764 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 20:36:01.515481  528764 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1217 20:36:01.515521  528764 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21808-485134/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4
	I1217 20:36:01.515533  528764 cache.go:65] Caching tarball of preloaded images
	I1217 20:36:01.515555  528764 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 20:36:01.515635  528764 preload.go:238] Found /home/jenkins/minikube-integration/21808-485134/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1217 20:36:01.515645  528764 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on crio
	I1217 20:36:01.515757  528764 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/config.json ...
	I1217 20:36:01.536964  528764 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 20:36:01.536994  528764 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 20:36:01.537012  528764 cache.go:243] Successfully downloaded all kic artifacts
	I1217 20:36:01.537046  528764 start.go:360] acquireMachinesLock for functional-655452: {Name:mk8b480106227149e1b79da3c4f580b9dc5e0f96 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 20:36:01.537100  528764 start.go:364] duration metric: took 37.99µs to acquireMachinesLock for "functional-655452"
	I1217 20:36:01.537118  528764 start.go:96] Skipping create...Using existing machine configuration
	I1217 20:36:01.537122  528764 fix.go:54] fixHost starting: 
	I1217 20:36:01.537383  528764 cli_runner.go:164] Run: docker container inspect functional-655452 --format={{.State.Status}}
	I1217 20:36:01.554557  528764 fix.go:112] recreateIfNeeded on functional-655452: state=Running err=<nil>
	W1217 20:36:01.554578  528764 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 20:36:01.557934  528764 out.go:252] * Updating the running docker "functional-655452" container ...
	I1217 20:36:01.557966  528764 machine.go:94] provisionDockerMachine start ...
	I1217 20:36:01.558073  528764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
	I1217 20:36:01.576191  528764 main.go:143] libmachine: Using SSH client type: native
	I1217 20:36:01.576509  528764 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1217 20:36:01.576515  528764 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 20:36:01.707478  528764 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-655452
	
	I1217 20:36:01.707493  528764 ubuntu.go:182] provisioning hostname "functional-655452"
	I1217 20:36:01.707564  528764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
	I1217 20:36:01.725762  528764 main.go:143] libmachine: Using SSH client type: native
	I1217 20:36:01.726063  528764 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1217 20:36:01.726071  528764 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-655452 && echo "functional-655452" | sudo tee /etc/hostname
	I1217 20:36:01.865176  528764 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-655452
	
	I1217 20:36:01.865255  528764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
	I1217 20:36:01.884852  528764 main.go:143] libmachine: Using SSH client type: native
	I1217 20:36:01.885159  528764 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1217 20:36:01.885174  528764 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-655452' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-655452/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-655452' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 20:36:02.016339  528764 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 20:36:02.016355  528764 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21808-485134/.minikube CaCertPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21808-485134/.minikube}
	I1217 20:36:02.016378  528764 ubuntu.go:190] setting up certificates
	I1217 20:36:02.016388  528764 provision.go:84] configureAuth start
	I1217 20:36:02.016451  528764 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-655452
	I1217 20:36:02.035106  528764 provision.go:143] copyHostCerts
	I1217 20:36:02.035175  528764 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem, removing ...
	I1217 20:36:02.035183  528764 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem
	I1217 20:36:02.035257  528764 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem (1082 bytes)
	I1217 20:36:02.035375  528764 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem, removing ...
	I1217 20:36:02.035379  528764 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem
	I1217 20:36:02.035406  528764 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem (1123 bytes)
	I1217 20:36:02.035470  528764 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem, removing ...
	I1217 20:36:02.035473  528764 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem
	I1217 20:36:02.035496  528764 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem (1675 bytes)
	I1217 20:36:02.035545  528764 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem org=jenkins.functional-655452 san=[127.0.0.1 192.168.49.2 functional-655452 localhost minikube]
	I1217 20:36:02.115164  528764 provision.go:177] copyRemoteCerts
	I1217 20:36:02.115221  528764 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 20:36:02.115260  528764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
	I1217 20:36:02.139076  528764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/functional-655452/id_rsa Username:docker}
	I1217 20:36:02.235601  528764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 20:36:02.254294  528764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 20:36:02.272604  528764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 20:36:02.290727  528764 provision.go:87] duration metric: took 274.326255ms to configureAuth
	I1217 20:36:02.290752  528764 ubuntu.go:206] setting minikube options for container-runtime
	I1217 20:36:02.291001  528764 config.go:182] Loaded profile config "functional-655452": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 20:36:02.291105  528764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
	I1217 20:36:02.309578  528764 main.go:143] libmachine: Using SSH client type: native
	I1217 20:36:02.309891  528764 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1217 20:36:02.309902  528764 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 20:36:02.644802  528764 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 20:36:02.644817  528764 machine.go:97] duration metric: took 1.086843683s to provisionDockerMachine
	I1217 20:36:02.644827  528764 start.go:293] postStartSetup for "functional-655452" (driver="docker")
	I1217 20:36:02.644838  528764 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 20:36:02.644899  528764 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 20:36:02.644944  528764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
	I1217 20:36:02.663334  528764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/functional-655452/id_rsa Username:docker}
	I1217 20:36:02.759464  528764 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 20:36:02.762934  528764 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 20:36:02.762952  528764 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 20:36:02.762970  528764 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-485134/.minikube/addons for local assets ...
	I1217 20:36:02.763029  528764 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-485134/.minikube/files for local assets ...
	I1217 20:36:02.763103  528764 filesync.go:149] local asset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem -> 4884122.pem in /etc/ssl/certs
	I1217 20:36:02.763175  528764 filesync.go:149] local asset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/test/nested/copy/488412/hosts -> hosts in /etc/test/nested/copy/488412
	I1217 20:36:02.763216  528764 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/488412
	I1217 20:36:02.770652  528764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem --> /etc/ssl/certs/4884122.pem (1708 bytes)
	I1217 20:36:02.788458  528764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/test/nested/copy/488412/hosts --> /etc/test/nested/copy/488412/hosts (40 bytes)
	I1217 20:36:02.805971  528764 start.go:296] duration metric: took 161.129975ms for postStartSetup
	I1217 20:36:02.806055  528764 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 20:36:02.806105  528764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
	I1217 20:36:02.832327  528764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/functional-655452/id_rsa Username:docker}
	I1217 20:36:02.932517  528764 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 20:36:02.937022  528764 fix.go:56] duration metric: took 1.399892436s for fixHost
	I1217 20:36:02.937037  528764 start.go:83] releasing machines lock for "functional-655452", held for 1.399929845s
	I1217 20:36:02.937101  528764 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-655452
	I1217 20:36:02.954767  528764 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem (1338 bytes)
	W1217 20:36:02.954820  528764 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412_empty.pem, impossibly tiny 0 bytes
	I1217 20:36:02.954828  528764 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 20:36:02.954855  528764 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem (1082 bytes)
	I1217 20:36:02.954880  528764 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem (1123 bytes)
	I1217 20:36:02.954903  528764 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem (1675 bytes)
	I1217 20:36:02.954966  528764 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem (1708 bytes)
	I1217 20:36:02.955032  528764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem --> /usr/share/ca-certificates/488412.pem (1338 bytes)
	I1217 20:36:02.955078  528764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
	I1217 20:36:02.972629  528764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/functional-655452/id_rsa Username:docker}
	I1217 20:36:03.082963  528764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem --> /usr/share/ca-certificates/4884122.pem (1708 bytes)
	I1217 20:36:03.101544  528764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 20:36:03.119807  528764 ssh_runner.go:195] Run: openssl version
	I1217 20:36:03.126345  528764 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/488412.pem
	I1217 20:36:03.134006  528764 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/488412.pem /etc/ssl/certs/488412.pem
	I1217 20:36:03.141755  528764 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/488412.pem
	I1217 20:36:03.145627  528764 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 20:21 /usr/share/ca-certificates/488412.pem
	I1217 20:36:03.145694  528764 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/488412.pem
	I1217 20:36:03.186918  528764 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 20:36:03.196074  528764 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4884122.pem
	I1217 20:36:03.205007  528764 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4884122.pem /etc/ssl/certs/4884122.pem
	I1217 20:36:03.212820  528764 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4884122.pem
	I1217 20:36:03.216798  528764 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 20:21 /usr/share/ca-certificates/4884122.pem
	I1217 20:36:03.216865  528764 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4884122.pem
	I1217 20:36:03.260241  528764 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 20:36:03.268200  528764 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:36:03.275663  528764 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 20:36:03.283259  528764 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:36:03.287077  528764 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:36:03.287187  528764 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:36:03.328526  528764 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 20:36:03.336152  528764 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-certificates >/dev/null 2>&1 && sudo update-ca-certificates || true"
	I1217 20:36:03.339768  528764 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-trust >/dev/null 2>&1 && sudo update-ca-trust extract || true"
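The sequence above is the standard CA-distribution routine: each PEM is copied under /usr/share/ca-certificates, its OpenSSL subject hash is computed, and a symlink named after that hash (51391683.0, 3ec20f2e.0, b5213941.0 here) is placed in /etc/ssl/certs, after which whichever trust-refresh tool the guest ships is invoked. A minimal sketch of the same idiom, with a hypothetical cert path:

	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/my-ca.pem)
	sudo ln -fs /usr/share/ca-certificates/my-ca.pem "/etc/ssl/certs/${HASH}.0"
	# run whichever refresher exists, exactly as the two guarded commands above do
	command -v update-ca-certificates >/dev/null 2>&1 && sudo update-ca-certificates
	command -v update-ca-trust >/dev/null 2>&1 && sudo update-ca-trust extract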
	I1217 20:36:03.343092  528764 ssh_runner.go:195] Run: cat /version.json
	I1217 20:36:03.343166  528764 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 20:36:03.444762  528764 ssh_runner.go:195] Run: systemctl --version
	I1217 20:36:03.450992  528764 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 20:36:03.489251  528764 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 20:36:03.493525  528764 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 20:36:03.493594  528764 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 20:36:03.501380  528764 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
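The find command above reads oddly because ssh_runner logs arguments with the shell quoting stripped. A runnable equivalent of what it does (rename any bridge/podman CNI configs out of the way so the kindnet CNI recommended below wins), with the {} interpolation made safe:

	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	  -printf '%p, ' -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;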
	I1217 20:36:03.501400  528764 start.go:496] detecting cgroup driver to use...
	I1217 20:36:03.501430  528764 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 20:36:03.501474  528764 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 20:36:03.519927  528764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 20:36:03.535865  528764 docker.go:218] disabling cri-docker service (if available) ...
	I1217 20:36:03.535924  528764 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 20:36:03.553665  528764 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 20:36:03.568077  528764 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 20:36:03.688788  528764 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 20:36:03.816391  528764 docker.go:234] disabling docker service ...
	I1217 20:36:03.816445  528764 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 20:36:03.832743  528764 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 20:36:03.846562  528764 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 20:36:03.965969  528764 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 20:36:04.109607  528764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 20:36:04.122680  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 20:36:04.137683  528764 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 20:36:04.137752  528764 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:36:04.147364  528764 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1217 20:36:04.147423  528764 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:36:04.157452  528764 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:36:04.166810  528764 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:36:04.176014  528764 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 20:36:04.184171  528764 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:36:04.192938  528764 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:36:04.201542  528764 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
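Just before these edits, the tee above pinned crictl to the CRI-O socket with a one-line /etc/crictl.yaml, so the later crictl calls skip endpoint auto-detection:

	runtime-endpoint: unix:///var/run/crio/crio.sock

The sed passes then shape /etc/crio/crio.conf.d/02-crio.conf roughly as follows (a sketch; section placement follows stock CRI-O drop-ins, and the real file carries further defaults):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]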
	I1217 20:36:04.210110  528764 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 20:36:04.217743  528764 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 20:36:04.225321  528764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:36:04.332263  528764 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 20:36:04.503245  528764 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 20:36:04.503305  528764 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 20:36:04.508393  528764 start.go:564] Will wait 60s for crictl version
	I1217 20:36:04.508461  528764 ssh_runner.go:195] Run: which crictl
	I1217 20:36:04.512401  528764 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 20:36:04.541968  528764 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 20:36:04.542059  528764 ssh_runner.go:195] Run: crio --version
	I1217 20:36:04.568941  528764 ssh_runner.go:195] Run: crio --version
	I1217 20:36:04.602248  528764 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	I1217 20:36:04.604894  528764 cli_runner.go:164] Run: docker network inspect functional-655452 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 20:36:04.620832  528764 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
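The grep checks whether 192.168.49.1 is already mapped to host.minikube.internal in the guest's /etc/hosts; when the entry is missing, minikube appends it, presumably along these lines (a sketch, not the exact implementation):

	grep -q 'host.minikube.internal$' /etc/hosts \
	  || printf '192.168.49.1\thost.minikube.internal\n' | sudo tee -a /etc/hosts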
	I1217 20:36:04.627460  528764 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1217 20:36:04.630066  528764 kubeadm.go:884] updating cluster {Name:functional-655452 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-655452 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 20:36:04.630187  528764 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1217 20:36:04.630246  528764 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 20:36:04.668067  528764 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 20:36:04.668079  528764 crio.go:433] Images already preloaded, skipping extraction
	I1217 20:36:04.668136  528764 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 20:36:04.698017  528764 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 20:36:04.698030  528764 cache_images.go:86] Images are preloaded, skipping loading
	I1217 20:36:04.698036  528764 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 crio true true} ...
	I1217 20:36:04.698140  528764 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-655452 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-655452 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
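The doubled ExecStart in the unit above is the usual systemd override idiom, not a mistake: the bare `ExecStart=` clears the value inherited from the base kubelet.service so the drop-in (written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below) can declare the minikube-specific command line:

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --hostname-override=functional-655452 ...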
	I1217 20:36:04.698216  528764 ssh_runner.go:195] Run: crio config
	I1217 20:36:04.769162  528764 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1217 20:36:04.769193  528764 cni.go:84] Creating CNI manager for ""
	I1217 20:36:04.769200  528764 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 20:36:04.769208  528764 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 20:36:04.769233  528764 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-655452 NodeName:functional-655452 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 20:36:04.769373  528764 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-655452"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
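	The generated file stacks four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by `---`. To sanity-check such a file by hand, recent kubeadm releases ship a validator; a hedged one-liner using the same versioned binary path as the log:

	sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml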
	
	I1217 20:36:04.769444  528764 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1217 20:36:04.777167  528764 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 20:36:04.777239  528764 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 20:36:04.784566  528764 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1217 20:36:04.797984  528764 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1217 20:36:04.810563  528764 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2069 bytes)
	I1217 20:36:04.823513  528764 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1217 20:36:04.827291  528764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:36:04.950251  528764 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 20:36:05.072220  528764 certs.go:69] Setting up /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452 for IP: 192.168.49.2
	I1217 20:36:05.072231  528764 certs.go:195] generating shared ca certs ...
	I1217 20:36:05.072245  528764 certs.go:227] acquiring lock for ca certs: {Name:mk2b1bd9fa0b029b02dd38a7b1b08717d4426eca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:36:05.072401  528764 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.key
	I1217 20:36:05.072442  528764 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.key
	I1217 20:36:05.072448  528764 certs.go:257] generating profile certs ...
	I1217 20:36:05.072540  528764 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/client.key
	I1217 20:36:05.072591  528764 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/apiserver.key.aa95dda5
	I1217 20:36:05.072629  528764 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/proxy-client.key
	I1217 20:36:05.072739  528764 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem (1338 bytes)
	W1217 20:36:05.072768  528764 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412_empty.pem, impossibly tiny 0 bytes
	I1217 20:36:05.072780  528764 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 20:36:05.072805  528764 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem (1082 bytes)
	I1217 20:36:05.072827  528764 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem (1123 bytes)
	I1217 20:36:05.072848  528764 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem (1675 bytes)
	I1217 20:36:05.072891  528764 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem (1708 bytes)
	I1217 20:36:05.073535  528764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 20:36:05.100676  528764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 20:36:05.124485  528764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 20:36:05.145313  528764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1217 20:36:05.166267  528764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 20:36:05.185043  528764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1217 20:36:05.202568  528764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 20:36:05.220530  528764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 20:36:05.238845  528764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem --> /usr/share/ca-certificates/4884122.pem (1708 bytes)
	I1217 20:36:05.257230  528764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 20:36:05.275490  528764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem --> /usr/share/ca-certificates/488412.pem (1338 bytes)
	I1217 20:36:05.293936  528764 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 20:36:05.307062  528764 ssh_runner.go:195] Run: openssl version
	I1217 20:36:05.314048  528764 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/488412.pem
	I1217 20:36:05.321882  528764 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/488412.pem /etc/ssl/certs/488412.pem
	I1217 20:36:05.329752  528764 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/488412.pem
	I1217 20:36:05.333743  528764 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 20:21 /usr/share/ca-certificates/488412.pem
	I1217 20:36:05.333820  528764 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/488412.pem
	I1217 20:36:05.375575  528764 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 20:36:05.383326  528764 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4884122.pem
	I1217 20:36:05.390831  528764 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4884122.pem /etc/ssl/certs/4884122.pem
	I1217 20:36:05.398670  528764 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4884122.pem
	I1217 20:36:05.402451  528764 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 20:21 /usr/share/ca-certificates/4884122.pem
	I1217 20:36:05.402506  528764 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4884122.pem
	I1217 20:36:05.445761  528764 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 20:36:05.453165  528764 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:36:05.460611  528764 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 20:36:05.468452  528764 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:36:05.472228  528764 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:36:05.472283  528764 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:36:05.513950  528764 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 20:36:05.521563  528764 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 20:36:05.525764  528764 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 20:36:05.567120  528764 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 20:36:05.608840  528764 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 20:36:05.649788  528764 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 20:36:05.692741  528764 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 20:36:05.738724  528764 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
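`openssl x509 -checkend 86400` exits nonzero when the certificate expires within the next 86400 seconds (24 h); the zero exits on each control-plane cert above are what let the restart path reuse the existing PKI. The same check, scripted over a few of the files probed here:

	for crt in apiserver apiserver-kubelet-client etcd/server front-proxy-client; do
	  sudo openssl x509 -noout -in "/var/lib/minikube/certs/$crt.crt" -checkend 86400 \
	    || echo "$crt.crt expires within 24h; regenerate" >&2
	done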
	I1217 20:36:05.779654  528764 kubeadm.go:401] StartCluster: {Name:functional-655452 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-655452 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:36:05.779744  528764 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 20:36:05.779806  528764 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 20:36:05.806396  528764 cri.go:89] found id: ""
	I1217 20:36:05.806453  528764 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 20:36:05.814019  528764 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 20:36:05.814027  528764 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 20:36:05.814076  528764 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 20:36:05.823754  528764 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 20:36:05.824259  528764 kubeconfig.go:125] found "functional-655452" server: "https://192.168.49.2:8441"
	I1217 20:36:05.825529  528764 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 20:36:05.834629  528764 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-17 20:21:29.177912325 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-17 20:36:04.817890668 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
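The drift check keys off the exit status of `diff -u`: 0 means the freshly rendered kubeadm.yaml.new matches what is already on disk, 1 (as here, where only enable-admission-plugins changed) triggers the reconfigure path seen below. Roughly:

	if ! sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
	  sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml  # then replay the kubeadm init phases
	fi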
	I1217 20:36:05.834639  528764 kubeadm.go:1161] stopping kube-system containers ...
	I1217 20:36:05.834650  528764 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1217 20:36:05.834705  528764 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 20:36:05.867919  528764 cri.go:89] found id: ""
	I1217 20:36:05.867989  528764 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1217 20:36:05.885438  528764 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 20:36:05.893366  528764 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Dec 17 20:25 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec 17 20:25 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec 17 20:25 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec 17 20:25 /etc/kubernetes/scheduler.conf
	
	I1217 20:36:05.893420  528764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1217 20:36:05.901137  528764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1217 20:36:05.909490  528764 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1217 20:36:05.909550  528764 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 20:36:05.916910  528764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1217 20:36:05.924811  528764 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1217 20:36:05.924869  528764 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 20:36:05.932331  528764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1217 20:36:05.940039  528764 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1217 20:36:05.940108  528764 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
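Each kubeconfig is grepped for the expected control-plane endpoint; exit status 1 (pattern absent, as for kubelet.conf, controller-manager.conf and scheduler.conf above) marks the file stale, so it is removed and left for the kubeconfig phase below to regenerate:

	if ! sudo grep -q 'https://control-plane.minikube.internal:8441' /etc/kubernetes/kubelet.conf; then
	  sudo rm -f /etc/kubernetes/kubelet.conf  # rebuilt by `kubeadm init phase kubeconfig all`
	fi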
	I1217 20:36:05.947225  528764 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 20:36:05.955062  528764 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 20:36:06.001485  528764 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 20:36:07.569758  528764 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.568246795s)
	I1217 20:36:07.569817  528764 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1217 20:36:07.780039  528764 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 20:36:07.827231  528764 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
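Rather than a full `kubeadm init`, the restart replays individual phases against the updated config. The same sequence by hand, with the versioned binary prepended to PATH exactly as the log does:

	for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
	  # $phase is left unquoted on purpose so "certs all" splits into subcommand + argument
	  sudo env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" \
	    kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
	done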
	I1217 20:36:07.887398  528764 api_server.go:52] waiting for apiserver process to appear ...
	I1217 20:36:07.887476  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... 118 further probes, identical except for the timestamp, fire at ~500 ms intervals between 20:36:08 and 20:37:07; none finds a kube-apiserver process ...]
	I1217 20:37:07.388433  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
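The minute of probes above is a plain readiness poll: with `-f` the pattern is matched against the full command line, `-x` requires it to match in its entirety, and `-n` selects the newest matching PID. As a loop (a sketch; the real code additionally enforces the 60 s budget noted at api_server.go:52):

	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	  sleep 0.5
	done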
	I1217 20:37:07.887764  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:37:07.887843  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:37:07.914157  528764 cri.go:89] found id: ""
	I1217 20:37:07.914172  528764 logs.go:282] 0 containers: []
	W1217 20:37:07.914179  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:37:07.914184  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:37:07.914241  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:37:07.939801  528764 cri.go:89] found id: ""
	I1217 20:37:07.939815  528764 logs.go:282] 0 containers: []
	W1217 20:37:07.939823  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:37:07.939828  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:37:07.939892  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:37:07.966197  528764 cri.go:89] found id: ""
	I1217 20:37:07.966213  528764 logs.go:282] 0 containers: []
	W1217 20:37:07.966221  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:37:07.966226  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:37:07.966284  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:37:07.997124  528764 cri.go:89] found id: ""
	I1217 20:37:07.997138  528764 logs.go:282] 0 containers: []
	W1217 20:37:07.997145  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:37:07.997150  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:37:07.997211  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:37:08.028280  528764 cri.go:89] found id: ""
	I1217 20:37:08.028295  528764 logs.go:282] 0 containers: []
	W1217 20:37:08.028302  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:37:08.028308  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:37:08.028368  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:37:08.058094  528764 cri.go:89] found id: ""
	I1217 20:37:08.058109  528764 logs.go:282] 0 containers: []
	W1217 20:37:08.058116  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:37:08.058121  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:37:08.058185  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:37:08.085720  528764 cri.go:89] found id: ""
	I1217 20:37:08.085736  528764 logs.go:282] 0 containers: []
	W1217 20:37:08.085744  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:37:08.085752  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:37:08.085763  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:37:08.150624  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:37:08.142553   11126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:08.142992   11126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:08.144649   11126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:08.145220   11126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:08.146782   11126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:37:08.142553   11126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:08.142992   11126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:08.144649   11126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:08.145220   11126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:08.146782   11126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:37:08.150636  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:37:08.150647  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:37:08.217929  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:37:08.217949  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:37:08.250550  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:37:08.250567  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:37:08.318542  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:37:08.318562  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
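With no control-plane containers found, the fallback is to snapshot node-level evidence, and the same triage can be rerun by hand on the guest:

	sudo journalctl -u kubelet -n 400 --no-pager
	sudo journalctl -u crio -n 400 --no-pager
	sudo crictl ps -a
	sudo dmesg --level warn,err,crit,alert,emerg | tail -n 400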
	I1217 20:37:10.835004  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:10.846829  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:37:10.846892  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:37:10.877739  528764 cri.go:89] found id: ""
	I1217 20:37:10.877756  528764 logs.go:282] 0 containers: []
	W1217 20:37:10.877762  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:37:10.877768  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:37:10.877829  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:37:10.903713  528764 cri.go:89] found id: ""
	I1217 20:37:10.903727  528764 logs.go:282] 0 containers: []
	W1217 20:37:10.903735  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:37:10.903740  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:37:10.903802  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:37:10.931733  528764 cri.go:89] found id: ""
	I1217 20:37:10.931747  528764 logs.go:282] 0 containers: []
	W1217 20:37:10.931754  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:37:10.931759  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:37:10.931818  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:37:10.957707  528764 cri.go:89] found id: ""
	I1217 20:37:10.957722  528764 logs.go:282] 0 containers: []
	W1217 20:37:10.957729  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:37:10.957735  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:37:10.957793  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:37:10.986438  528764 cri.go:89] found id: ""
	I1217 20:37:10.986452  528764 logs.go:282] 0 containers: []
	W1217 20:37:10.986459  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:37:10.986464  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:37:10.986530  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:37:11.014361  528764 cri.go:89] found id: ""
	I1217 20:37:11.014385  528764 logs.go:282] 0 containers: []
	W1217 20:37:11.014393  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:37:11.014402  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:37:11.014462  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:37:11.041366  528764 cri.go:89] found id: ""
	I1217 20:37:11.041381  528764 logs.go:282] 0 containers: []
	W1217 20:37:11.041388  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:37:11.041401  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:37:11.041411  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:37:11.056502  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:37:11.056519  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:37:11.122467  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:37:11.114176   11237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:11.114734   11237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:11.116369   11237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:11.116862   11237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:11.118392   11237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:37:11.114176   11237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:11.114734   11237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:11.116369   11237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:11.116862   11237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:11.118392   11237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:37:11.122477  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:37:11.122486  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:37:11.190244  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:37:11.190265  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:37:11.220700  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:37:11.220717  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:37:13.792757  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:13.802840  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:37:13.802899  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:37:13.836386  528764 cri.go:89] found id: ""
	I1217 20:37:13.836401  528764 logs.go:282] 0 containers: []
	W1217 20:37:13.836408  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:37:13.836415  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:37:13.836471  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:37:13.870570  528764 cri.go:89] found id: ""
	I1217 20:37:13.870585  528764 logs.go:282] 0 containers: []
	W1217 20:37:13.870592  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:37:13.870597  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:37:13.870656  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:37:13.898823  528764 cri.go:89] found id: ""
	I1217 20:37:13.898837  528764 logs.go:282] 0 containers: []
	W1217 20:37:13.898845  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:37:13.898850  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:37:13.898908  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:37:13.926200  528764 cri.go:89] found id: ""
	I1217 20:37:13.926214  528764 logs.go:282] 0 containers: []
	W1217 20:37:13.926221  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:37:13.926226  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:37:13.926284  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:37:13.952625  528764 cri.go:89] found id: ""
	I1217 20:37:13.952639  528764 logs.go:282] 0 containers: []
	W1217 20:37:13.952647  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:37:13.952652  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:37:13.952711  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:37:13.978517  528764 cri.go:89] found id: ""
	I1217 20:37:13.978531  528764 logs.go:282] 0 containers: []
	W1217 20:37:13.978539  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:37:13.978544  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:37:13.978602  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:37:14.010201  528764 cri.go:89] found id: ""
	I1217 20:37:14.010215  528764 logs.go:282] 0 containers: []
	W1217 20:37:14.010223  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:37:14.010231  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:37:14.010242  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:37:14.075917  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:37:14.075936  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:37:14.091123  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:37:14.091142  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:37:14.155624  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:37:14.146305   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:14.147214   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:14.149124   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:14.149628   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:14.151335   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:37:14.146305   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:14.147214   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:14.149124   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:14.149628   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:14.151335   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:37:14.155636  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:37:14.155647  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:37:14.224215  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:37:14.224237  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:37:16.756286  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:16.766692  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:37:16.766752  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:37:16.795671  528764 cri.go:89] found id: ""
	I1217 20:37:16.795692  528764 logs.go:282] 0 containers: []
	W1217 20:37:16.795700  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:37:16.795705  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:37:16.795762  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:37:16.829850  528764 cri.go:89] found id: ""
	I1217 20:37:16.829863  528764 logs.go:282] 0 containers: []
	W1217 20:37:16.829870  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:37:16.829875  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:37:16.829932  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:37:16.860495  528764 cri.go:89] found id: ""
	I1217 20:37:16.860509  528764 logs.go:282] 0 containers: []
	W1217 20:37:16.860516  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:37:16.860521  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:37:16.860580  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:37:16.888120  528764 cri.go:89] found id: ""
	I1217 20:37:16.888133  528764 logs.go:282] 0 containers: []
	W1217 20:37:16.888141  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:37:16.888146  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:37:16.888201  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:37:16.918449  528764 cri.go:89] found id: ""
	I1217 20:37:16.918463  528764 logs.go:282] 0 containers: []
	W1217 20:37:16.918469  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:37:16.918484  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:37:16.918542  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:37:16.948626  528764 cri.go:89] found id: ""
	I1217 20:37:16.948652  528764 logs.go:282] 0 containers: []
	W1217 20:37:16.948659  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:37:16.948665  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:37:16.948729  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:37:16.977608  528764 cri.go:89] found id: ""
	I1217 20:37:16.977622  528764 logs.go:282] 0 containers: []
	W1217 20:37:16.977630  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:37:16.977637  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:37:16.977647  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:37:17.042493  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:37:17.042513  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:37:17.057131  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:37:17.057148  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:37:17.125378  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:37:17.116772   11447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:17.117492   11447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:17.119295   11447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:17.119699   11447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:17.121218   11447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:37:17.116772   11447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:17.117492   11447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:17.119295   11447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:17.119699   11447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:17.121218   11447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:37:17.125389  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:37:17.125400  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:37:17.192802  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:37:17.192822  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:37:19.720869  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:19.730761  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:37:19.730822  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:37:19.757595  528764 cri.go:89] found id: ""
	I1217 20:37:19.757609  528764 logs.go:282] 0 containers: []
	W1217 20:37:19.757617  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:37:19.757622  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:37:19.757679  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:37:19.783074  528764 cri.go:89] found id: ""
	I1217 20:37:19.783087  528764 logs.go:282] 0 containers: []
	W1217 20:37:19.783102  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:37:19.783108  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:37:19.783165  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:37:19.810405  528764 cri.go:89] found id: ""
	I1217 20:37:19.810419  528764 logs.go:282] 0 containers: []
	W1217 20:37:19.810426  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:37:19.810432  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:37:19.810493  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:37:19.837744  528764 cri.go:89] found id: ""
	I1217 20:37:19.837758  528764 logs.go:282] 0 containers: []
	W1217 20:37:19.837766  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:37:19.837771  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:37:19.837828  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:37:19.873857  528764 cri.go:89] found id: ""
	I1217 20:37:19.873872  528764 logs.go:282] 0 containers: []
	W1217 20:37:19.873879  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:37:19.873884  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:37:19.873952  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:37:19.902376  528764 cri.go:89] found id: ""
	I1217 20:37:19.902390  528764 logs.go:282] 0 containers: []
	W1217 20:37:19.902397  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:37:19.902402  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:37:19.902477  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:37:19.928530  528764 cri.go:89] found id: ""
	I1217 20:37:19.928544  528764 logs.go:282] 0 containers: []
	W1217 20:37:19.928552  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:37:19.928559  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:37:19.928570  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:37:19.993175  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:37:19.984569   11545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:19.985337   11545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:19.986955   11545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:19.987601   11545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:19.989181   11545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:37:19.984569   11545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:19.985337   11545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:19.986955   11545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:19.987601   11545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:19.989181   11545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:37:19.993185  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:37:19.993196  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:37:20.066305  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:37:20.066326  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:37:20.099789  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:37:20.099806  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:37:20.165283  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:37:20.165304  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:37:22.681290  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:22.691134  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:37:22.691202  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:37:22.723831  528764 cri.go:89] found id: ""
	I1217 20:37:22.723845  528764 logs.go:282] 0 containers: []
	W1217 20:37:22.723862  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:37:22.723868  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:37:22.723933  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:37:22.749315  528764 cri.go:89] found id: ""
	I1217 20:37:22.749329  528764 logs.go:282] 0 containers: []
	W1217 20:37:22.749336  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:37:22.749341  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:37:22.749396  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:37:22.773712  528764 cri.go:89] found id: ""
	I1217 20:37:22.773738  528764 logs.go:282] 0 containers: []
	W1217 20:37:22.773746  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:37:22.773751  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:37:22.773825  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:37:22.799128  528764 cri.go:89] found id: ""
	I1217 20:37:22.799147  528764 logs.go:282] 0 containers: []
	W1217 20:37:22.799154  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:37:22.799159  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:37:22.799214  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:37:22.830333  528764 cri.go:89] found id: ""
	I1217 20:37:22.830347  528764 logs.go:282] 0 containers: []
	W1217 20:37:22.830354  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:37:22.830359  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:37:22.830414  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:37:22.857658  528764 cri.go:89] found id: ""
	I1217 20:37:22.857671  528764 logs.go:282] 0 containers: []
	W1217 20:37:22.857678  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:37:22.857683  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:37:22.857740  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:37:22.892187  528764 cri.go:89] found id: ""
	I1217 20:37:22.892202  528764 logs.go:282] 0 containers: []
	W1217 20:37:22.892209  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:37:22.892217  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:37:22.892226  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:37:22.963552  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:37:22.963572  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:37:22.992259  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:37:22.992274  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:37:23.058615  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:37:23.058636  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:37:23.073409  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:37:23.073442  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:37:23.138641  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:37:23.129846   11670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:23.130677   11670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:23.132360   11670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:23.132912   11670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:23.134580   11670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:37:23.129846   11670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:23.130677   11670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:23.132360   11670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:23.132912   11670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:23.134580   11670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:37:25.638919  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:25.648946  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:37:25.649032  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:37:25.678111  528764 cri.go:89] found id: ""
	I1217 20:37:25.678127  528764 logs.go:282] 0 containers: []
	W1217 20:37:25.678134  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:37:25.678140  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:37:25.678230  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:37:25.704834  528764 cri.go:89] found id: ""
	I1217 20:37:25.704848  528764 logs.go:282] 0 containers: []
	W1217 20:37:25.704855  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:37:25.704861  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:37:25.704943  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:37:25.731274  528764 cri.go:89] found id: ""
	I1217 20:37:25.731287  528764 logs.go:282] 0 containers: []
	W1217 20:37:25.731295  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:37:25.731300  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:37:25.731354  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:37:25.756601  528764 cri.go:89] found id: ""
	I1217 20:37:25.756615  528764 logs.go:282] 0 containers: []
	W1217 20:37:25.756622  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:37:25.756628  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:37:25.756689  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:37:25.781743  528764 cri.go:89] found id: ""
	I1217 20:37:25.781757  528764 logs.go:282] 0 containers: []
	W1217 20:37:25.781764  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:37:25.781787  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:37:25.781846  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:37:25.810686  528764 cri.go:89] found id: ""
	I1217 20:37:25.810699  528764 logs.go:282] 0 containers: []
	W1217 20:37:25.810718  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:37:25.810724  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:37:25.810791  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:37:25.861184  528764 cri.go:89] found id: ""
	I1217 20:37:25.861200  528764 logs.go:282] 0 containers: []
	W1217 20:37:25.861207  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:37:25.861215  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:37:25.861237  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:37:25.937980  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:37:25.938000  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:37:25.953961  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:37:25.953980  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:37:26.020362  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:37:26.011417   11762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:26.012115   11762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:26.013886   11762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:26.014423   11762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:26.016131   11762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:37:26.011417   11762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:26.012115   11762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:26.013886   11762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:26.014423   11762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:26.016131   11762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:37:26.020376  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:37:26.020387  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:37:26.092647  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:37:26.092669  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:37:28.622440  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:28.632675  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:37:28.632735  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:37:28.657198  528764 cri.go:89] found id: ""
	I1217 20:37:28.657213  528764 logs.go:282] 0 containers: []
	W1217 20:37:28.657220  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:37:28.657226  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:37:28.657284  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:37:28.683432  528764 cri.go:89] found id: ""
	I1217 20:37:28.683446  528764 logs.go:282] 0 containers: []
	W1217 20:37:28.683453  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:37:28.683458  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:37:28.683513  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:37:28.708948  528764 cri.go:89] found id: ""
	I1217 20:37:28.708962  528764 logs.go:282] 0 containers: []
	W1217 20:37:28.708969  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:37:28.708975  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:37:28.709030  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:37:28.738615  528764 cri.go:89] found id: ""
	I1217 20:37:28.738629  528764 logs.go:282] 0 containers: []
	W1217 20:37:28.738637  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:37:28.738642  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:37:28.738697  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:37:28.764458  528764 cri.go:89] found id: ""
	I1217 20:37:28.764472  528764 logs.go:282] 0 containers: []
	W1217 20:37:28.764479  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:37:28.764484  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:37:28.764544  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:37:28.789220  528764 cri.go:89] found id: ""
	I1217 20:37:28.789234  528764 logs.go:282] 0 containers: []
	W1217 20:37:28.789242  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:37:28.789247  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:37:28.789302  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:37:28.813820  528764 cri.go:89] found id: ""
	I1217 20:37:28.813835  528764 logs.go:282] 0 containers: []
	W1217 20:37:28.813841  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:37:28.813848  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:37:28.813869  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:37:28.896349  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:37:28.887880   11857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:28.888593   11857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:28.890336   11857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:28.890868   11857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:28.892434   11857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:37:28.887880   11857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:28.888593   11857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:28.890336   11857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:28.890868   11857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:28.892434   11857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:37:28.896359  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:37:28.896369  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:37:28.964976  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:37:28.964996  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:37:28.995089  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:37:28.995105  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:37:29.073565  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:37:29.073593  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:37:31.589038  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:31.599070  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:37:31.599131  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:37:31.624604  528764 cri.go:89] found id: ""
	I1217 20:37:31.624619  528764 logs.go:282] 0 containers: []
	W1217 20:37:31.624626  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:37:31.624631  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:37:31.624688  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:37:31.650593  528764 cri.go:89] found id: ""
	I1217 20:37:31.650608  528764 logs.go:282] 0 containers: []
	W1217 20:37:31.650616  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:37:31.650621  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:37:31.650684  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:37:31.679069  528764 cri.go:89] found id: ""
	I1217 20:37:31.679084  528764 logs.go:282] 0 containers: []
	W1217 20:37:31.679091  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:37:31.679096  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:37:31.679153  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:37:31.709079  528764 cri.go:89] found id: ""
	I1217 20:37:31.709093  528764 logs.go:282] 0 containers: []
	W1217 20:37:31.709100  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:37:31.709105  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:37:31.709162  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:37:31.740223  528764 cri.go:89] found id: ""
	I1217 20:37:31.740237  528764 logs.go:282] 0 containers: []
	W1217 20:37:31.740244  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:37:31.740252  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:37:31.740307  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:37:31.771855  528764 cri.go:89] found id: ""
	I1217 20:37:31.771869  528764 logs.go:282] 0 containers: []
	W1217 20:37:31.771877  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:37:31.771883  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:37:31.771942  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:37:31.798992  528764 cri.go:89] found id: ""
	I1217 20:37:31.799006  528764 logs.go:282] 0 containers: []
	W1217 20:37:31.799013  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:37:31.799021  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:37:31.799031  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:37:31.876265  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:37:31.876285  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:37:31.912678  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:37:31.912694  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:37:31.979473  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:37:31.979494  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:37:31.994138  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:37:31.994154  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:37:32.058919  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:37:32.050836   11991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:32.051662   11991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:32.053342   11991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:32.053690   11991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:32.055195   11991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:37:32.050836   11991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:32.051662   11991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:32.053342   11991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:32.053690   11991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:32.055195   11991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:37:34.560573  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:34.570410  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:37:34.570477  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:37:34.595394  528764 cri.go:89] found id: ""
	I1217 20:37:34.595407  528764 logs.go:282] 0 containers: []
	W1217 20:37:34.595415  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:37:34.595420  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:37:34.595474  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:37:34.620347  528764 cri.go:89] found id: ""
	I1217 20:37:34.620362  528764 logs.go:282] 0 containers: []
	W1217 20:37:34.620376  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:37:34.620382  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:37:34.620444  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:37:34.646173  528764 cri.go:89] found id: ""
	I1217 20:37:34.646188  528764 logs.go:282] 0 containers: []
	W1217 20:37:34.646195  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:37:34.646200  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:37:34.646259  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:37:34.675076  528764 cri.go:89] found id: ""
	I1217 20:37:34.675090  528764 logs.go:282] 0 containers: []
	W1217 20:37:34.675098  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:37:34.675103  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:37:34.675160  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:37:34.700382  528764 cri.go:89] found id: ""
	I1217 20:37:34.700396  528764 logs.go:282] 0 containers: []
	W1217 20:37:34.700403  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:37:34.700414  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:37:34.700479  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:37:34.727372  528764 cri.go:89] found id: ""
	I1217 20:37:34.727387  528764 logs.go:282] 0 containers: []
	W1217 20:37:34.727394  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:37:34.727400  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:37:34.727456  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:37:34.753290  528764 cri.go:89] found id: ""
	I1217 20:37:34.753305  528764 logs.go:282] 0 containers: []
	W1217 20:37:34.753312  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:37:34.753319  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:37:34.753331  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:37:34.782001  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:37:34.782019  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:37:34.847492  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:37:34.847511  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:37:34.863498  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:37:34.863515  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:37:34.939936  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:37:34.931817   12094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:34.932582   12094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:34.934298   12094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:34.934763   12094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:34.936241   12094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:37:34.931817   12094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:34.932582   12094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:34.934298   12094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:34.934763   12094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:34.936241   12094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:37:34.939947  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:37:34.939958  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:37:37.511892  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:37.522041  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:37:37.522101  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:37:37.546092  528764 cri.go:89] found id: ""
	I1217 20:37:37.546106  528764 logs.go:282] 0 containers: []
	W1217 20:37:37.546113  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:37:37.546119  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:37:37.546179  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:37:37.571827  528764 cri.go:89] found id: ""
	I1217 20:37:37.571841  528764 logs.go:282] 0 containers: []
	W1217 20:37:37.571848  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:37:37.571853  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:37:37.571912  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:37:37.597752  528764 cri.go:89] found id: ""
	I1217 20:37:37.597766  528764 logs.go:282] 0 containers: []
	W1217 20:37:37.597774  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:37:37.597779  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:37:37.597840  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:37:37.624088  528764 cri.go:89] found id: ""
	I1217 20:37:37.624102  528764 logs.go:282] 0 containers: []
	W1217 20:37:37.624109  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:37:37.624114  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:37:37.624170  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:37:37.651097  528764 cri.go:89] found id: ""
	I1217 20:37:37.651112  528764 logs.go:282] 0 containers: []
	W1217 20:37:37.651119  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:37:37.651125  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:37:37.651188  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:37:37.678706  528764 cri.go:89] found id: ""
	I1217 20:37:37.678720  528764 logs.go:282] 0 containers: []
	W1217 20:37:37.678728  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:37:37.678743  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:37:37.678804  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:37:37.705805  528764 cri.go:89] found id: ""
	I1217 20:37:37.705817  528764 logs.go:282] 0 containers: []
	W1217 20:37:37.705825  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:37:37.705833  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:37:37.705844  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:37:37.721021  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:37:37.721041  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:37:37.788297  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:37:37.779817   12180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:37.780331   12180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:37.782117   12180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:37.782612   12180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:37.784219   12180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:37:37.779817   12180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:37.780331   12180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:37.782117   12180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:37.782612   12180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:37.784219   12180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:37:37.788308  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:37:37.788318  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:37:37.865227  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:37:37.865247  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:37:37.897290  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:37:37.897308  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
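The block above is one full pass of minikube's apiserver wait loop: pgrep finds no kube-apiserver process, each control-plane component is then probed through crictl (all come back empty), and the usual log sources are collected. A minimal sketch of running the same probe by hand, assuming a shell on the node (e.g. via minikube ssh); the for-loop and echo wrapping are illustrative, but the crictl filters are exactly the ones the loop uses:

    # Probe each control-plane component the way the wait loop does.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      # Empty output corresponds to the 'found id: ""' lines in the log.
      [ -n "$ids" ] && echo "$name: $ids" || echo "no container matching \"$name\""
    done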
	I1217 20:37:40.462446  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:40.472823  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:37:40.472885  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:37:40.502899  528764 cri.go:89] found id: ""
	I1217 20:37:40.502914  528764 logs.go:282] 0 containers: []
	W1217 20:37:40.502926  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:37:40.502931  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:37:40.502988  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:37:40.528131  528764 cri.go:89] found id: ""
	I1217 20:37:40.528144  528764 logs.go:282] 0 containers: []
	W1217 20:37:40.528151  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:37:40.528156  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:37:40.528214  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:37:40.552632  528764 cri.go:89] found id: ""
	I1217 20:37:40.552646  528764 logs.go:282] 0 containers: []
	W1217 20:37:40.552653  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:37:40.552659  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:37:40.552715  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:37:40.578013  528764 cri.go:89] found id: ""
	I1217 20:37:40.578028  528764 logs.go:282] 0 containers: []
	W1217 20:37:40.578035  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:37:40.578042  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:37:40.578100  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:37:40.604172  528764 cri.go:89] found id: ""
	I1217 20:37:40.604186  528764 logs.go:282] 0 containers: []
	W1217 20:37:40.604193  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:37:40.604198  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:37:40.604253  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:37:40.629837  528764 cri.go:89] found id: ""
	I1217 20:37:40.629851  528764 logs.go:282] 0 containers: []
	W1217 20:37:40.629867  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:37:40.629872  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:37:40.629931  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:37:40.656555  528764 cri.go:89] found id: ""
	I1217 20:37:40.656568  528764 logs.go:282] 0 containers: []
	W1217 20:37:40.656576  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:37:40.656583  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:37:40.656593  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:37:40.670930  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:37:40.670946  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:37:40.736814  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:37:40.728916   12288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:40.729413   12288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:40.731058   12288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:40.731457   12288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:40.733254   12288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:37:40.728916   12288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:40.729413   12288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:40.731058   12288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:40.731457   12288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:40.733254   12288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:37:40.736824  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:37:40.736835  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:37:40.803782  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:37:40.803800  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:37:40.851556  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:37:40.851572  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:37:43.430627  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:43.440939  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:37:43.441000  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:37:43.470749  528764 cri.go:89] found id: ""
	I1217 20:37:43.470764  528764 logs.go:282] 0 containers: []
	W1217 20:37:43.470771  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:37:43.470777  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:37:43.470833  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:37:43.495753  528764 cri.go:89] found id: ""
	I1217 20:37:43.495766  528764 logs.go:282] 0 containers: []
	W1217 20:37:43.495774  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:37:43.495779  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:37:43.495836  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:37:43.521880  528764 cri.go:89] found id: ""
	I1217 20:37:43.521896  528764 logs.go:282] 0 containers: []
	W1217 20:37:43.521903  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:37:43.521908  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:37:43.521971  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:37:43.547990  528764 cri.go:89] found id: ""
	I1217 20:37:43.548004  528764 logs.go:282] 0 containers: []
	W1217 20:37:43.548012  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:37:43.548018  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:37:43.548080  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:37:43.576401  528764 cri.go:89] found id: ""
	I1217 20:37:43.576415  528764 logs.go:282] 0 containers: []
	W1217 20:37:43.576422  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:37:43.576427  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:37:43.576485  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:37:43.604828  528764 cri.go:89] found id: ""
	I1217 20:37:43.604840  528764 logs.go:282] 0 containers: []
	W1217 20:37:43.604848  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:37:43.604853  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:37:43.604909  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:37:43.636907  528764 cri.go:89] found id: ""
	I1217 20:37:43.636920  528764 logs.go:282] 0 containers: []
	W1217 20:37:43.636927  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:37:43.636935  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:37:43.636945  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:37:43.701148  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:37:43.701165  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:37:43.715342  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:37:43.715357  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:37:43.787937  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:37:43.780156   12396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:43.780601   12396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:43.782216   12396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:43.782718   12396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:43.784167   12396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:37:43.780156   12396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:43.780601   12396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:43.782216   12396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:43.782718   12396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:43.784167   12396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:37:43.787957  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:37:43.787968  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:37:43.858959  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:37:43.858978  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:37:46.395799  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:46.406118  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:37:46.406190  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:37:46.433062  528764 cri.go:89] found id: ""
	I1217 20:37:46.433076  528764 logs.go:282] 0 containers: []
	W1217 20:37:46.433083  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:37:46.433089  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:37:46.433151  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:37:46.459553  528764 cri.go:89] found id: ""
	I1217 20:37:46.459568  528764 logs.go:282] 0 containers: []
	W1217 20:37:46.459575  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:37:46.459604  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:37:46.459668  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:37:46.484831  528764 cri.go:89] found id: ""
	I1217 20:37:46.484845  528764 logs.go:282] 0 containers: []
	W1217 20:37:46.484853  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:37:46.484858  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:37:46.484920  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:37:46.509669  528764 cri.go:89] found id: ""
	I1217 20:37:46.509683  528764 logs.go:282] 0 containers: []
	W1217 20:37:46.509690  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:37:46.509695  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:37:46.509752  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:37:46.534227  528764 cri.go:89] found id: ""
	I1217 20:37:46.534242  528764 logs.go:282] 0 containers: []
	W1217 20:37:46.534254  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:37:46.534260  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:37:46.534316  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:37:46.563383  528764 cri.go:89] found id: ""
	I1217 20:37:46.563397  528764 logs.go:282] 0 containers: []
	W1217 20:37:46.563405  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:37:46.563411  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:37:46.563476  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:37:46.589321  528764 cri.go:89] found id: ""
	I1217 20:37:46.589335  528764 logs.go:282] 0 containers: []
	W1217 20:37:46.589342  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:37:46.589350  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:37:46.589364  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:37:46.654894  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:37:46.654914  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:37:46.669806  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:37:46.669822  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:37:46.731726  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:37:46.723562   12499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:46.724128   12499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:46.725849   12499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:46.726343   12499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:46.727890   12499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:37:46.723562   12499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:46.724128   12499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:46.725849   12499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:46.726343   12499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:46.727890   12499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:37:46.731737  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:37:46.731763  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:37:46.799300  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:37:46.799320  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
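Every kubectl invocation in these cycles fails identically: dial tcp [::1]:8441 is refused because nothing is listening on the apiserver port configured for this profile. Two hedged one-liners to confirm that from the node; ss and curl are assumptions here, not commands the test itself runs:

    # Is anything bound to the apiserver port?
    sudo ss -ltnp | grep 8441 || echo "nothing listening on 8441"
    # Probe the health endpoint directly; expect a refused connection while the apiserver is down.
    curl -ksS https://localhost:8441/healthz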
	I1217 20:37:49.348034  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:49.358157  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:37:49.358218  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:37:49.382823  528764 cri.go:89] found id: ""
	I1217 20:37:49.382837  528764 logs.go:282] 0 containers: []
	W1217 20:37:49.382844  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:37:49.382849  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:37:49.382917  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:37:49.409079  528764 cri.go:89] found id: ""
	I1217 20:37:49.409094  528764 logs.go:282] 0 containers: []
	W1217 20:37:49.409101  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:37:49.409106  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:37:49.409162  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:37:49.434313  528764 cri.go:89] found id: ""
	I1217 20:37:49.434327  528764 logs.go:282] 0 containers: []
	W1217 20:37:49.434340  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:37:49.434354  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:37:49.434426  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:37:49.460512  528764 cri.go:89] found id: ""
	I1217 20:37:49.460527  528764 logs.go:282] 0 containers: []
	W1217 20:37:49.460535  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:37:49.460551  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:37:49.460609  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:37:49.486735  528764 cri.go:89] found id: ""
	I1217 20:37:49.486748  528764 logs.go:282] 0 containers: []
	W1217 20:37:49.486756  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:37:49.486762  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:37:49.486830  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:37:49.512071  528764 cri.go:89] found id: ""
	I1217 20:37:49.512085  528764 logs.go:282] 0 containers: []
	W1217 20:37:49.512092  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:37:49.512098  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:37:49.512155  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:37:49.541263  528764 cri.go:89] found id: ""
	I1217 20:37:49.541277  528764 logs.go:282] 0 containers: []
	W1217 20:37:49.541284  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:37:49.541293  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:37:49.541310  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:37:49.570361  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:37:49.570378  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:37:49.638598  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:37:49.638618  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:37:49.653362  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:37:49.653381  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:37:49.715767  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:37:49.708109   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:49.708580   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:49.710179   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:49.710574   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:49.712039   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:37:49.708109   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:49.708580   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:49.710179   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:49.710574   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:49.712039   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:37:49.715778  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:37:49.715788  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:37:52.283800  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:52.293434  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:37:52.293494  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:37:52.318791  528764 cri.go:89] found id: ""
	I1217 20:37:52.318805  528764 logs.go:282] 0 containers: []
	W1217 20:37:52.318812  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:37:52.318818  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:37:52.318876  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:37:52.344510  528764 cri.go:89] found id: ""
	I1217 20:37:52.344525  528764 logs.go:282] 0 containers: []
	W1217 20:37:52.344543  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:37:52.344549  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:37:52.344607  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:37:52.369118  528764 cri.go:89] found id: ""
	I1217 20:37:52.369132  528764 logs.go:282] 0 containers: []
	W1217 20:37:52.369140  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:37:52.369145  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:37:52.369200  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:37:52.394333  528764 cri.go:89] found id: ""
	I1217 20:37:52.394346  528764 logs.go:282] 0 containers: []
	W1217 20:37:52.394377  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:37:52.394383  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:37:52.394448  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:37:52.419501  528764 cri.go:89] found id: ""
	I1217 20:37:52.419525  528764 logs.go:282] 0 containers: []
	W1217 20:37:52.419532  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:37:52.419537  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:37:52.419626  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:37:52.448909  528764 cri.go:89] found id: ""
	I1217 20:37:52.448923  528764 logs.go:282] 0 containers: []
	W1217 20:37:52.448930  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:37:52.448936  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:37:52.449018  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:37:52.478490  528764 cri.go:89] found id: ""
	I1217 20:37:52.478513  528764 logs.go:282] 0 containers: []
	W1217 20:37:52.478521  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:37:52.478529  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:37:52.478539  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:37:52.542920  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:37:52.542939  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:37:52.558035  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:37:52.558052  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:37:52.621690  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:37:52.613254   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:52.613953   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:52.615616   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:52.615996   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:52.617541   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:37:52.613254   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:52.613953   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:52.615616   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:52.615996   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:52.617541   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:37:52.621710  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:37:52.621721  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:37:52.689051  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:37:52.689070  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:37:55.225326  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:55.235484  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:37:55.235545  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:37:55.260455  528764 cri.go:89] found id: ""
	I1217 20:37:55.260469  528764 logs.go:282] 0 containers: []
	W1217 20:37:55.260477  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:37:55.260482  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:37:55.260542  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:37:55.285381  528764 cri.go:89] found id: ""
	I1217 20:37:55.285396  528764 logs.go:282] 0 containers: []
	W1217 20:37:55.285404  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:37:55.285409  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:37:55.285464  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:37:55.311167  528764 cri.go:89] found id: ""
	I1217 20:37:55.311181  528764 logs.go:282] 0 containers: []
	W1217 20:37:55.311188  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:37:55.311194  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:37:55.311266  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:37:55.336553  528764 cri.go:89] found id: ""
	I1217 20:37:55.336568  528764 logs.go:282] 0 containers: []
	W1217 20:37:55.336575  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:37:55.336580  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:37:55.336636  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:37:55.362555  528764 cri.go:89] found id: ""
	I1217 20:37:55.362569  528764 logs.go:282] 0 containers: []
	W1217 20:37:55.362576  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:37:55.362582  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:37:55.362636  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:37:55.392446  528764 cri.go:89] found id: ""
	I1217 20:37:55.392460  528764 logs.go:282] 0 containers: []
	W1217 20:37:55.392468  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:37:55.392473  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:37:55.392529  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:37:55.421227  528764 cri.go:89] found id: ""
	I1217 20:37:55.421242  528764 logs.go:282] 0 containers: []
	W1217 20:37:55.421250  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:37:55.421257  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:37:55.421267  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:37:55.452467  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:37:55.452485  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:37:55.520333  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:37:55.520354  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:37:55.535397  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:37:55.535423  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:37:55.600267  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:37:55.591521   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:55.592108   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:55.593636   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:55.594292   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:55.596071   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:37:55.591521   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:55.592108   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:55.593636   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:55.594292   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:55.596071   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:37:55.600278  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:37:55.600290  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
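With no control-plane containers ever appearing, the actionable signal lives in the journals each cycle collects: CRI-O for sandbox and image-pull errors, the kubelet for static-pod sync failures. The same collection, lifted almost verbatim from the loop above, can be re-run by hand:

    sudo journalctl -u crio -n 400      # container runtime: pull/create errors
    sudo journalctl -u kubelet -n 400   # kubelet: static-pod sync for the control plane
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo crictl ps -a                   # container status, as in the loop's fallback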
	I1217 20:37:58.172840  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:58.183231  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:37:58.183290  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:37:58.207527  528764 cri.go:89] found id: ""
	I1217 20:37:58.207541  528764 logs.go:282] 0 containers: []
	W1217 20:37:58.207548  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:37:58.207553  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:37:58.207649  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:37:58.232533  528764 cri.go:89] found id: ""
	I1217 20:37:58.232547  528764 logs.go:282] 0 containers: []
	W1217 20:37:58.232555  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:37:58.232559  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:37:58.232613  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:37:58.257969  528764 cri.go:89] found id: ""
	I1217 20:37:58.257983  528764 logs.go:282] 0 containers: []
	W1217 20:37:58.257990  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:37:58.257996  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:37:58.258051  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:37:58.283047  528764 cri.go:89] found id: ""
	I1217 20:37:58.283060  528764 logs.go:282] 0 containers: []
	W1217 20:37:58.283067  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:37:58.283072  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:37:58.283126  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:37:58.308494  528764 cri.go:89] found id: ""
	I1217 20:37:58.308508  528764 logs.go:282] 0 containers: []
	W1217 20:37:58.308515  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:37:58.308521  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:37:58.308578  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:37:58.333008  528764 cri.go:89] found id: ""
	I1217 20:37:58.333022  528764 logs.go:282] 0 containers: []
	W1217 20:37:58.333029  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:37:58.333035  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:37:58.333087  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:37:58.363097  528764 cri.go:89] found id: ""
	I1217 20:37:58.363111  528764 logs.go:282] 0 containers: []
	W1217 20:37:58.363118  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:37:58.363126  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:37:58.363145  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:37:58.428415  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:37:58.419398   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:58.420110   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:58.421854   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:58.422467   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:58.424133   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:37:58.419398   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:58.420110   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:58.421854   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:58.422467   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:58.424133   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:37:58.428426  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:37:58.428437  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:37:58.497159  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:37:58.497179  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:37:58.528904  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:37:58.528921  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:37:58.594783  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:37:58.594803  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
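The cycles repeat on a roughly three-second cadence, each opening with the same pgrep check, until the apiserver process appears or the surrounding test times out. A hypothetical way to watch for recovery interactively, assuming watch is installed; the process pattern is the one the loop itself uses:

    watch -n 3 "sudo pgrep -xnf 'kube-apiserver.*minikube.*' && echo up || echo 'apiserver not running'"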
	I1217 20:38:01.111545  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:38:01.123462  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:38:01.123520  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:38:01.152472  528764 cri.go:89] found id: ""
	I1217 20:38:01.152487  528764 logs.go:282] 0 containers: []
	W1217 20:38:01.152494  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:38:01.152499  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:38:01.152561  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:38:01.178899  528764 cri.go:89] found id: ""
	I1217 20:38:01.178913  528764 logs.go:282] 0 containers: []
	W1217 20:38:01.178921  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:38:01.178926  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:38:01.178983  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:38:01.206687  528764 cri.go:89] found id: ""
	I1217 20:38:01.206701  528764 logs.go:282] 0 containers: []
	W1217 20:38:01.206709  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:38:01.206714  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:38:01.206771  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:38:01.232497  528764 cri.go:89] found id: ""
	I1217 20:38:01.232511  528764 logs.go:282] 0 containers: []
	W1217 20:38:01.232519  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:38:01.232524  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:38:01.232579  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:38:01.261011  528764 cri.go:89] found id: ""
	I1217 20:38:01.261025  528764 logs.go:282] 0 containers: []
	W1217 20:38:01.261032  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:38:01.261037  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:38:01.261098  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:38:01.286117  528764 cri.go:89] found id: ""
	I1217 20:38:01.286132  528764 logs.go:282] 0 containers: []
	W1217 20:38:01.286150  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:38:01.286156  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:38:01.286222  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:38:01.312040  528764 cri.go:89] found id: ""
	I1217 20:38:01.312055  528764 logs.go:282] 0 containers: []
	W1217 20:38:01.312062  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:38:01.312069  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:38:01.312080  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:38:01.382670  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:38:01.382692  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:38:01.414378  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:38:01.414394  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:38:01.482999  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:38:01.483019  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:38:01.497972  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:38:01.497987  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:38:01.566351  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:38:01.557988   13036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:01.558761   13036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:01.560515   13036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:01.561020   13036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:01.562479   13036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:38:01.557988   13036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:01.558761   13036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:01.560515   13036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:01.561020   13036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:01.562479   13036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:38:04.066612  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:38:04.079947  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:38:04.080010  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:38:04.114202  528764 cri.go:89] found id: ""
	I1217 20:38:04.114216  528764 logs.go:282] 0 containers: []
	W1217 20:38:04.114223  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:38:04.114228  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:38:04.114294  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:38:04.144225  528764 cri.go:89] found id: ""
	I1217 20:38:04.144238  528764 logs.go:282] 0 containers: []
	W1217 20:38:04.144246  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:38:04.144250  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:38:04.144306  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:38:04.174041  528764 cri.go:89] found id: ""
	I1217 20:38:04.174055  528764 logs.go:282] 0 containers: []
	W1217 20:38:04.174066  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:38:04.174072  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:38:04.174138  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:38:04.198282  528764 cri.go:89] found id: ""
	I1217 20:38:04.198296  528764 logs.go:282] 0 containers: []
	W1217 20:38:04.198304  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:38:04.198309  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:38:04.198381  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:38:04.223855  528764 cri.go:89] found id: ""
	I1217 20:38:04.223869  528764 logs.go:282] 0 containers: []
	W1217 20:38:04.223888  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:38:04.223897  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:38:04.223965  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:38:04.249576  528764 cri.go:89] found id: ""
	I1217 20:38:04.249592  528764 logs.go:282] 0 containers: []
	W1217 20:38:04.249599  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:38:04.249604  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:38:04.249667  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:38:04.278330  528764 cri.go:89] found id: ""
	I1217 20:38:04.278344  528764 logs.go:282] 0 containers: []
	W1217 20:38:04.278351  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:38:04.278359  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:38:04.278369  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:38:04.346075  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:38:04.346098  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:38:04.379272  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:38:04.379287  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:38:04.446775  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:38:04.446795  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:38:04.461788  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:38:04.461804  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:38:04.526831  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:38:04.519073   13145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:04.519534   13145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:04.520989   13145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:04.521420   13145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:04.522964   13145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:38:04.519073   13145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:04.519534   13145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:04.520989   13145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:04.521420   13145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:04.522964   13145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:38:07.028018  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:38:07.038329  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:38:07.038394  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:38:07.070882  528764 cri.go:89] found id: ""
	I1217 20:38:07.070911  528764 logs.go:282] 0 containers: []
	W1217 20:38:07.070919  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:38:07.070925  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:38:07.070991  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:38:07.104836  528764 cri.go:89] found id: ""
	I1217 20:38:07.104850  528764 logs.go:282] 0 containers: []
	W1217 20:38:07.104857  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:38:07.104863  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:38:07.104932  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:38:07.141894  528764 cri.go:89] found id: ""
	I1217 20:38:07.141908  528764 logs.go:282] 0 containers: []
	W1217 20:38:07.141916  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:38:07.141921  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:38:07.141990  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:38:07.169039  528764 cri.go:89] found id: ""
	I1217 20:38:07.169053  528764 logs.go:282] 0 containers: []
	W1217 20:38:07.169061  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:38:07.169066  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:38:07.169123  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:38:07.194478  528764 cri.go:89] found id: ""
	I1217 20:38:07.194501  528764 logs.go:282] 0 containers: []
	W1217 20:38:07.194509  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:38:07.194514  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:38:07.194579  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:38:07.219609  528764 cri.go:89] found id: ""
	I1217 20:38:07.219624  528764 logs.go:282] 0 containers: []
	W1217 20:38:07.219632  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:38:07.219638  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:38:07.219705  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:38:07.243819  528764 cri.go:89] found id: ""
	I1217 20:38:07.243832  528764 logs.go:282] 0 containers: []
	W1217 20:38:07.243840  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:38:07.243847  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:38:07.243857  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:38:07.311464  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:38:07.311483  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:38:07.343698  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:38:07.343751  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:38:07.410312  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:38:07.410332  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:38:07.424918  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:38:07.424934  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:38:07.487872  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:38:07.479467   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:07.480073   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:07.481799   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:07.482374   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:07.484022   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:38:07.479467   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:07.480073   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:07.481799   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:07.482374   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:07.484022   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:38:09.989569  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:38:10.015377  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:38:10.015448  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:38:10.044563  528764 cri.go:89] found id: ""
	I1217 20:38:10.044582  528764 logs.go:282] 0 containers: []
	W1217 20:38:10.044590  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:38:10.044596  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:38:10.044659  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:38:10.082544  528764 cri.go:89] found id: ""
	I1217 20:38:10.082572  528764 logs.go:282] 0 containers: []
	W1217 20:38:10.082579  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:38:10.082585  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:38:10.082655  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:38:10.111998  528764 cri.go:89] found id: ""
	I1217 20:38:10.112021  528764 logs.go:282] 0 containers: []
	W1217 20:38:10.112028  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:38:10.112034  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:38:10.112090  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:38:10.143847  528764 cri.go:89] found id: ""
	I1217 20:38:10.143875  528764 logs.go:282] 0 containers: []
	W1217 20:38:10.143883  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:38:10.143888  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:38:10.143959  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:38:10.169935  528764 cri.go:89] found id: ""
	I1217 20:38:10.169948  528764 logs.go:282] 0 containers: []
	W1217 20:38:10.169956  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:38:10.169961  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:38:10.170035  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:38:10.199354  528764 cri.go:89] found id: ""
	I1217 20:38:10.199367  528764 logs.go:282] 0 containers: []
	W1217 20:38:10.199389  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:38:10.199395  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:38:10.199469  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:38:10.224921  528764 cri.go:89] found id: ""
	I1217 20:38:10.224934  528764 logs.go:282] 0 containers: []
	W1217 20:38:10.224942  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:38:10.224950  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:38:10.224961  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:38:10.292927  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:38:10.292947  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:38:10.321993  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:38:10.322010  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:38:10.388855  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:38:10.388876  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:38:10.404211  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:38:10.404228  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:38:10.466886  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:38:10.458803   13359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:10.459494   13359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:10.461201   13359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:10.461646   13359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:10.463180   13359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:38:10.458803   13359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:10.459494   13359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:10.461201   13359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:10.461646   13359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:10.463180   13359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:38:12.968194  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:38:12.978084  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:38:12.978143  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:38:13.006691  528764 cri.go:89] found id: ""
	I1217 20:38:13.006706  528764 logs.go:282] 0 containers: []
	W1217 20:38:13.006713  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:38:13.006719  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:38:13.006779  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:38:13.032773  528764 cri.go:89] found id: ""
	I1217 20:38:13.032787  528764 logs.go:282] 0 containers: []
	W1217 20:38:13.032795  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:38:13.032800  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:38:13.032854  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:38:13.059128  528764 cri.go:89] found id: ""
	I1217 20:38:13.059142  528764 logs.go:282] 0 containers: []
	W1217 20:38:13.059150  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:38:13.059155  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:38:13.059213  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:38:13.093983  528764 cri.go:89] found id: ""
	I1217 20:38:13.093997  528764 logs.go:282] 0 containers: []
	W1217 20:38:13.094005  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:38:13.094010  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:38:13.094066  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:38:13.136453  528764 cri.go:89] found id: ""
	I1217 20:38:13.136467  528764 logs.go:282] 0 containers: []
	W1217 20:38:13.136474  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:38:13.136481  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:38:13.136536  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:38:13.166382  528764 cri.go:89] found id: ""
	I1217 20:38:13.166396  528764 logs.go:282] 0 containers: []
	W1217 20:38:13.166403  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:38:13.166409  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:38:13.166476  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:38:13.194638  528764 cri.go:89] found id: ""
	I1217 20:38:13.194651  528764 logs.go:282] 0 containers: []
	W1217 20:38:13.194658  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:38:13.194666  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:38:13.194689  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:38:13.261344  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:38:13.261362  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:38:13.276057  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:38:13.276073  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:38:13.341759  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:38:13.333469   13450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:13.334136   13450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:13.335845   13450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:13.336372   13450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:13.337969   13450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:38:13.333469   13450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:13.334136   13450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:13.335845   13450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:13.336372   13450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:13.337969   13450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:38:13.341769  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:38:13.341780  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:38:13.412593  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:38:13.412613  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:38:15.945731  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:38:15.956026  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:38:15.956085  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:38:15.980875  528764 cri.go:89] found id: ""
	I1217 20:38:15.980889  528764 logs.go:282] 0 containers: []
	W1217 20:38:15.980897  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:38:15.980902  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:38:15.980956  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:38:16.017238  528764 cri.go:89] found id: ""
	I1217 20:38:16.017253  528764 logs.go:282] 0 containers: []
	W1217 20:38:16.017260  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:38:16.017265  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:38:16.017327  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:38:16.042662  528764 cri.go:89] found id: ""
	I1217 20:38:16.042676  528764 logs.go:282] 0 containers: []
	W1217 20:38:16.042684  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:38:16.042700  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:38:16.042759  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:38:16.070239  528764 cri.go:89] found id: ""
	I1217 20:38:16.070253  528764 logs.go:282] 0 containers: []
	W1217 20:38:16.070265  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:38:16.070281  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:38:16.070344  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:38:16.101763  528764 cri.go:89] found id: ""
	I1217 20:38:16.101777  528764 logs.go:282] 0 containers: []
	W1217 20:38:16.101785  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:38:16.101802  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:38:16.101863  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:38:16.132808  528764 cri.go:89] found id: ""
	I1217 20:38:16.132822  528764 logs.go:282] 0 containers: []
	W1217 20:38:16.132830  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:38:16.132835  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:38:16.132904  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:38:16.162901  528764 cri.go:89] found id: ""
	I1217 20:38:16.162925  528764 logs.go:282] 0 containers: []
	W1217 20:38:16.162932  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:38:16.162940  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:38:16.162951  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:38:16.177475  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:38:16.177491  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:38:16.239620  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:38:16.231437   13552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:16.232280   13552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:16.233889   13552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:16.234228   13552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:16.235746   13552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:38:16.231437   13552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:16.232280   13552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:16.233889   13552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:16.234228   13552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:16.235746   13552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:38:16.239630  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:38:16.239641  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:38:16.306695  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:38:16.306714  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:38:16.338739  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:38:16.338754  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:38:18.906627  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:38:18.916877  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:38:18.916940  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:38:18.940995  528764 cri.go:89] found id: ""
	I1217 20:38:18.941009  528764 logs.go:282] 0 containers: []
	W1217 20:38:18.941016  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:38:18.941022  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:38:18.941090  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:38:18.967366  528764 cri.go:89] found id: ""
	I1217 20:38:18.967381  528764 logs.go:282] 0 containers: []
	W1217 20:38:18.967388  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:38:18.967393  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:38:18.967448  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:38:18.993265  528764 cri.go:89] found id: ""
	I1217 20:38:18.993279  528764 logs.go:282] 0 containers: []
	W1217 20:38:18.993286  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:38:18.993291  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:38:18.993345  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:38:19.020582  528764 cri.go:89] found id: ""
	I1217 20:38:19.020595  528764 logs.go:282] 0 containers: []
	W1217 20:38:19.020603  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:38:19.020608  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:38:19.020666  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:38:19.045982  528764 cri.go:89] found id: ""
	I1217 20:38:19.045996  528764 logs.go:282] 0 containers: []
	W1217 20:38:19.046005  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:38:19.046010  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:38:19.046069  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:38:19.073910  528764 cri.go:89] found id: ""
	I1217 20:38:19.073923  528764 logs.go:282] 0 containers: []
	W1217 20:38:19.073930  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:38:19.073936  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:38:19.073992  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:38:19.113478  528764 cri.go:89] found id: ""
	I1217 20:38:19.113491  528764 logs.go:282] 0 containers: []
	W1217 20:38:19.113499  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:38:19.113507  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:38:19.113517  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:38:19.181345  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:38:19.181364  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:38:19.196831  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:38:19.196848  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:38:19.262885  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:38:19.253623   13659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:19.254429   13659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:19.256066   13659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:19.256658   13659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:19.258445   13659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:38:19.253623   13659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:19.254429   13659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:19.256066   13659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:19.256658   13659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:19.258445   13659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:38:19.262896  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:38:19.262907  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:38:19.332927  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:38:19.332947  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:38:21.863218  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:38:21.873488  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:38:21.873552  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:38:21.901892  528764 cri.go:89] found id: ""
	I1217 20:38:21.901907  528764 logs.go:282] 0 containers: []
	W1217 20:38:21.901915  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:38:21.901930  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:38:21.901988  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:38:21.928067  528764 cri.go:89] found id: ""
	I1217 20:38:21.928080  528764 logs.go:282] 0 containers: []
	W1217 20:38:21.928087  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:38:21.928092  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:38:21.928149  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:38:21.953356  528764 cri.go:89] found id: ""
	I1217 20:38:21.953371  528764 logs.go:282] 0 containers: []
	W1217 20:38:21.953378  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:38:21.953383  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:38:21.953444  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:38:21.987415  528764 cri.go:89] found id: ""
	I1217 20:38:21.987428  528764 logs.go:282] 0 containers: []
	W1217 20:38:21.987436  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:38:21.987442  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:38:21.987509  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:38:22.016922  528764 cri.go:89] found id: ""
	I1217 20:38:22.016937  528764 logs.go:282] 0 containers: []
	W1217 20:38:22.016945  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:38:22.016951  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:38:22.017009  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:38:22.044463  528764 cri.go:89] found id: ""
	I1217 20:38:22.044477  528764 logs.go:282] 0 containers: []
	W1217 20:38:22.044484  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:38:22.044490  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:38:22.044545  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:38:22.072815  528764 cri.go:89] found id: ""
	I1217 20:38:22.072828  528764 logs.go:282] 0 containers: []
	W1217 20:38:22.072836  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:38:22.072844  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:38:22.072854  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:38:22.106754  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:38:22.106778  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:38:22.177000  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:38:22.177019  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:38:22.191928  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:38:22.191945  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:38:22.254841  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:38:22.246562   13773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:22.247341   13773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:22.249143   13773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:22.249615   13773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:22.251134   13773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:38:22.246562   13773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:22.247341   13773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:22.249143   13773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:22.249615   13773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:22.251134   13773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:38:22.254851  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:38:22.254862  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:38:24.826532  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:38:24.836772  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:38:24.836836  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:38:24.862693  528764 cri.go:89] found id: ""
	I1217 20:38:24.862706  528764 logs.go:282] 0 containers: []
	W1217 20:38:24.862714  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:38:24.862719  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:38:24.862789  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:38:24.887641  528764 cri.go:89] found id: ""
	I1217 20:38:24.887656  528764 logs.go:282] 0 containers: []
	W1217 20:38:24.887663  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:38:24.887668  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:38:24.887737  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:38:24.913131  528764 cri.go:89] found id: ""
	I1217 20:38:24.913145  528764 logs.go:282] 0 containers: []
	W1217 20:38:24.913168  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:38:24.913174  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:38:24.913242  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:38:24.939734  528764 cri.go:89] found id: ""
	I1217 20:38:24.939748  528764 logs.go:282] 0 containers: []
	W1217 20:38:24.939755  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:38:24.939760  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:38:24.939815  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:38:24.964904  528764 cri.go:89] found id: ""
	I1217 20:38:24.964919  528764 logs.go:282] 0 containers: []
	W1217 20:38:24.964925  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:38:24.964930  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:38:24.964988  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:38:24.990333  528764 cri.go:89] found id: ""
	I1217 20:38:24.990348  528764 logs.go:282] 0 containers: []
	W1217 20:38:24.990355  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:38:24.990361  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:38:24.990421  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:38:25.019872  528764 cri.go:89] found id: ""
	I1217 20:38:25.019887  528764 logs.go:282] 0 containers: []
	W1217 20:38:25.019895  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:38:25.019902  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:38:25.019914  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:38:25.036413  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:38:25.036438  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:38:25.112619  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:38:25.103911   13861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:25.104770   13861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:25.106472   13861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:25.107045   13861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:25.108652   13861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 20:38:25.112632  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:38:25.112642  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:38:25.184378  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:38:25.184399  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:38:25.216673  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:38:25.216689  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
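	The probe that opens each of these cycles can be replayed by hand over `minikube ssh`. A minimal sketch of the same checks the log records (container names taken from the log above; the loop is illustrative glue, not minikube's own code):

	    # Is any apiserver process alive for this profile?
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	    # Ask the CRI runtime for each expected control-plane container, in any state.
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	        echo "== ${name} =="
	        sudo crictl ps -a --quiet --name="${name}"
	    done

	Every probe here returns nothing, which is why each cycle then falls back to gathering host-level logs instead of container logs.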
	I1217 20:38:27.785567  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:38:27.796326  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:38:27.796391  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:38:27.825782  528764 cri.go:89] found id: ""
	I1217 20:38:27.825796  528764 logs.go:282] 0 containers: []
	W1217 20:38:27.825804  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:38:27.825809  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:38:27.825864  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:38:27.850601  528764 cri.go:89] found id: ""
	I1217 20:38:27.850614  528764 logs.go:282] 0 containers: []
	W1217 20:38:27.850627  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:38:27.850632  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:38:27.850700  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:38:27.876056  528764 cri.go:89] found id: ""
	I1217 20:38:27.876070  528764 logs.go:282] 0 containers: []
	W1217 20:38:27.876082  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:38:27.876087  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:38:27.876151  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:38:27.901899  528764 cri.go:89] found id: ""
	I1217 20:38:27.901913  528764 logs.go:282] 0 containers: []
	W1217 20:38:27.901920  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:38:27.901926  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:38:27.901997  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:38:27.931527  528764 cri.go:89] found id: ""
	I1217 20:38:27.931541  528764 logs.go:282] 0 containers: []
	W1217 20:38:27.931548  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:38:27.931553  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:38:27.931627  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:38:27.956390  528764 cri.go:89] found id: ""
	I1217 20:38:27.956404  528764 logs.go:282] 0 containers: []
	W1217 20:38:27.956411  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:38:27.956417  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:38:27.956473  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:38:27.985929  528764 cri.go:89] found id: ""
	I1217 20:38:27.985943  528764 logs.go:282] 0 containers: []
	W1217 20:38:27.985951  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:38:27.985959  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:38:27.985970  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:38:28.054474  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:38:28.054492  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:38:28.070115  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:38:28.070132  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:38:28.151327  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:38:28.142186   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:28.142985   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:28.145194   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:28.145756   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:28.147299   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 20:38:28.151337  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:38:28.151347  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:38:28.220518  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:38:28.220542  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
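	Each "Gathering logs for ..." step likewise maps to a plain shell command that appears verbatim in the log; grouped together for reference (only the grouping and comments are added):

	    sudo journalctl -u kubelet -n 400                                        # kubelet
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400  # kernel warnings/errors
	    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes \
	        --kubeconfig=/var/lib/minikube/kubeconfig                            # fails while the apiserver is down
	    sudo journalctl -u crio -n 400                                           # CRI-O
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a            # container status, docker fallback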
	[The same probe-and-gather cycle repeats at 20:38:30, 20:38:33, 20:38:36, 20:38:39, 20:38:42, and 20:38:45 with identical results: no kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, or kindnet containers found, and every "describe nodes" attempt failing with "connection refused" against localhost:8441. Duplicate cycles omitted; first and last occurrences shown.]
	I1217 20:38:48.485624  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:38:48.495313  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:38:48.495374  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:38:48.520059  528764 cri.go:89] found id: ""
	I1217 20:38:48.520074  528764 logs.go:282] 0 containers: []
	W1217 20:38:48.520081  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:38:48.520087  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:38:48.520143  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:38:48.545655  528764 cri.go:89] found id: ""
	I1217 20:38:48.545670  528764 logs.go:282] 0 containers: []
	W1217 20:38:48.545677  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:38:48.545682  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:38:48.545740  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:38:48.570521  528764 cri.go:89] found id: ""
	I1217 20:38:48.570535  528764 logs.go:282] 0 containers: []
	W1217 20:38:48.570543  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:38:48.570548  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:38:48.570606  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:38:48.596861  528764 cri.go:89] found id: ""
	I1217 20:38:48.596875  528764 logs.go:282] 0 containers: []
	W1217 20:38:48.596883  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:38:48.596888  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:38:48.596946  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:38:48.623093  528764 cri.go:89] found id: ""
	I1217 20:38:48.623115  528764 logs.go:282] 0 containers: []
	W1217 20:38:48.623123  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:38:48.623128  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:38:48.623203  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:38:48.648854  528764 cri.go:89] found id: ""
	I1217 20:38:48.648868  528764 logs.go:282] 0 containers: []
	W1217 20:38:48.648876  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:38:48.648881  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:38:48.648953  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:38:48.673887  528764 cri.go:89] found id: ""
	I1217 20:38:48.673911  528764 logs.go:282] 0 containers: []
	W1217 20:38:48.673919  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:38:48.673928  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:38:48.673939  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:38:48.739985  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:38:48.740004  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:38:48.754655  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:38:48.754672  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:38:48.818714  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:38:48.810661   14706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:48.811171   14706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:48.812860   14706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:48.813319   14706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:48.814815   14706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 20:38:48.818724  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:38:48.818734  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:38:48.889255  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:38:48.889281  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
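	Given that every cycle fails identically, a direct socket check is the quickest manual confirmation that nothing is listening on the apiserver port. This sketch is not taken from the log; it assumes `ss` and `curl` are available inside the node:

	    # Expect no LISTEN entry for 8441 while the apiserver container is absent.
	    sudo ss -ltn 'sport = :8441'
	    # The health endpoint should likewise refuse the TCP connection.
	    curl -sk --max-time 2 https://localhost:8441/healthz || echo "apiserver unreachable"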
	I1217 20:38:51.421767  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:38:51.432066  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:38:51.432137  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:38:51.461100  528764 cri.go:89] found id: ""
	I1217 20:38:51.461115  528764 logs.go:282] 0 containers: []
	W1217 20:38:51.461123  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:38:51.461132  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:38:51.461205  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:38:51.493482  528764 cri.go:89] found id: ""
	I1217 20:38:51.493495  528764 logs.go:282] 0 containers: []
	W1217 20:38:51.493503  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:38:51.493508  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:38:51.493573  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:38:51.523360  528764 cri.go:89] found id: ""
	I1217 20:38:51.523374  528764 logs.go:282] 0 containers: []
	W1217 20:38:51.523382  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:38:51.523387  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:38:51.523443  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:38:51.549129  528764 cri.go:89] found id: ""
	I1217 20:38:51.549143  528764 logs.go:282] 0 containers: []
	W1217 20:38:51.549151  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:38:51.549156  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:38:51.549212  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:38:51.575573  528764 cri.go:89] found id: ""
	I1217 20:38:51.575613  528764 logs.go:282] 0 containers: []
	W1217 20:38:51.575621  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:38:51.575631  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:38:51.575698  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:38:51.601059  528764 cri.go:89] found id: ""
	I1217 20:38:51.601074  528764 logs.go:282] 0 containers: []
	W1217 20:38:51.601081  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:38:51.601087  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:38:51.601153  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:38:51.626446  528764 cri.go:89] found id: ""
	I1217 20:38:51.626461  528764 logs.go:282] 0 containers: []
	W1217 20:38:51.626468  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:38:51.626476  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:38:51.626487  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:38:51.693973  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:38:51.693993  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:38:51.724023  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:38:51.724039  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:38:51.788885  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:38:51.788906  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:38:51.803552  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:38:51.803568  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:38:51.866022  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:38:51.858220   14821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:51.858930   14821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:51.860542   14821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:51.860857   14821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:51.862309   14821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
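
Each polling pass above issues the same crictl query once per control-plane component, varying only the --name filter; --quiet limits the output to container IDs, so an empty result is exactly what the log records as `found id: ""` and `0 containers`. An equivalent loop, purely illustrative, with the component list read off the log:

	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	  ids=$(sudo crictl ps -a --quiet --name="$name")   # IDs only; empty when nothing matches
	  [ -n "$ids" ] || echo "No container was found matching \"$name\""
	done
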
	I1217 20:38:54.367685  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:38:54.378312  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:38:54.378367  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:38:54.407726  528764 cri.go:89] found id: ""
	I1217 20:38:54.407744  528764 logs.go:282] 0 containers: []
	W1217 20:38:54.407752  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:38:54.407758  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:38:54.407815  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:38:54.432535  528764 cri.go:89] found id: ""
	I1217 20:38:54.432550  528764 logs.go:282] 0 containers: []
	W1217 20:38:54.432557  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:38:54.432562  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:38:54.432623  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:38:54.458438  528764 cri.go:89] found id: ""
	I1217 20:38:54.458453  528764 logs.go:282] 0 containers: []
	W1217 20:38:54.458460  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:38:54.458465  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:38:54.458527  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:38:54.487170  528764 cri.go:89] found id: ""
	I1217 20:38:54.487184  528764 logs.go:282] 0 containers: []
	W1217 20:38:54.487191  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:38:54.487198  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:38:54.487254  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:38:54.512876  528764 cri.go:89] found id: ""
	I1217 20:38:54.512890  528764 logs.go:282] 0 containers: []
	W1217 20:38:54.512897  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:38:54.512902  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:38:54.512959  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:38:54.537031  528764 cri.go:89] found id: ""
	I1217 20:38:54.537044  528764 logs.go:282] 0 containers: []
	W1217 20:38:54.537051  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:38:54.537056  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:38:54.537112  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:38:54.562349  528764 cri.go:89] found id: ""
	I1217 20:38:54.562363  528764 logs.go:282] 0 containers: []
	W1217 20:38:54.562387  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:38:54.562396  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:38:54.562406  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:38:54.628118  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:38:54.628137  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:38:54.642915  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:38:54.642932  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:38:54.707130  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:38:54.699152   14913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:54.699635   14913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:54.701269   14913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:54.701677   14913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:54.703119   14913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 20:38:54.707141  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:38:54.707152  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:38:54.775317  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:38:54.775338  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
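
Every describe-nodes attempt fails on the same `dial tcp [::1]:8441: connect: connection refused`, i.e. nothing is listening on the apiserver port at all, so kubectl cannot even fetch the API group list. A direct manual probe would confirm that; this is a sketch only, and it assumes ss and curl are available on the node (the port number comes from the log):

	sudo ss -tlnp | grep 8441 || echo "no listener on 8441"   # is anything bound to the port?
	curl -k https://localhost:8441/healthz                    # -k: the cluster cert is self-signed;
	                                                          # expect "connection refused" here too until kube-apiserver is up
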
	I1217 20:38:57.310952  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:38:57.322922  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:38:57.322983  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:38:57.357392  528764 cri.go:89] found id: ""
	I1217 20:38:57.357406  528764 logs.go:282] 0 containers: []
	W1217 20:38:57.357413  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:38:57.357420  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:38:57.357476  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:38:57.384349  528764 cri.go:89] found id: ""
	I1217 20:38:57.384363  528764 logs.go:282] 0 containers: []
	W1217 20:38:57.384373  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:38:57.384378  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:38:57.384434  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:38:57.412576  528764 cri.go:89] found id: ""
	I1217 20:38:57.412590  528764 logs.go:282] 0 containers: []
	W1217 20:38:57.412598  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:38:57.412603  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:38:57.412662  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:38:57.439190  528764 cri.go:89] found id: ""
	I1217 20:38:57.439205  528764 logs.go:282] 0 containers: []
	W1217 20:38:57.439212  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:38:57.439217  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:38:57.439305  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:38:57.466239  528764 cri.go:89] found id: ""
	I1217 20:38:57.466253  528764 logs.go:282] 0 containers: []
	W1217 20:38:57.466262  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:38:57.466267  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:38:57.466324  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:38:57.491495  528764 cri.go:89] found id: ""
	I1217 20:38:57.491508  528764 logs.go:282] 0 containers: []
	W1217 20:38:57.491516  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:38:57.491522  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:38:57.491597  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:38:57.517009  528764 cri.go:89] found id: ""
	I1217 20:38:57.517023  528764 logs.go:282] 0 containers: []
	W1217 20:38:57.517030  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:38:57.517038  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:38:57.517048  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:38:57.582648  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:38:57.582669  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:38:57.597231  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:38:57.597249  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:38:57.663163  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:38:57.654987   15018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:57.655397   15018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:57.656981   15018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:57.657561   15018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:57.659204   15018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 20:38:57.663174  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:38:57.663186  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:38:57.735126  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:38:57.735151  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
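
The check that opens each iteration is `pgrep -xnf`: -f matches the pattern against the full command line rather than just the process name, -x requires the pattern to match that whole line, and -n reports only the newest matching PID. Broken out with the log's own pattern:

	# -f  match against the full command line
	# -x  the regex must match the entire line
	# -n  print only the newest matching PID
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
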
	I1217 20:39:00.265877  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:39:00.292750  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:39:00.292841  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:39:00.342493  528764 cri.go:89] found id: ""
	I1217 20:39:00.342529  528764 logs.go:282] 0 containers: []
	W1217 20:39:00.342553  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:39:00.342560  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:39:00.342673  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:39:00.389833  528764 cri.go:89] found id: ""
	I1217 20:39:00.389858  528764 logs.go:282] 0 containers: []
	W1217 20:39:00.389866  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:39:00.389871  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:39:00.389943  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:39:00.427417  528764 cri.go:89] found id: ""
	I1217 20:39:00.427442  528764 logs.go:282] 0 containers: []
	W1217 20:39:00.427450  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:39:00.427455  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:39:00.427525  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:39:00.455698  528764 cri.go:89] found id: ""
	I1217 20:39:00.455712  528764 logs.go:282] 0 containers: []
	W1217 20:39:00.455720  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:39:00.455726  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:39:00.455784  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:39:00.487535  528764 cri.go:89] found id: ""
	I1217 20:39:00.487551  528764 logs.go:282] 0 containers: []
	W1217 20:39:00.487558  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:39:00.487576  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:39:00.487666  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:39:00.514228  528764 cri.go:89] found id: ""
	I1217 20:39:00.514243  528764 logs.go:282] 0 containers: []
	W1217 20:39:00.514251  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:39:00.514256  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:39:00.514315  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:39:00.540536  528764 cri.go:89] found id: ""
	I1217 20:39:00.540561  528764 logs.go:282] 0 containers: []
	W1217 20:39:00.540569  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:39:00.540576  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:39:00.540586  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:39:00.607064  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:39:00.607084  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:39:00.639882  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:39:00.639899  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:39:00.705607  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:39:00.705629  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:39:00.721491  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:39:00.721506  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:39:00.784593  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:39:00.776120   15140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:00.776725   15140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:00.778453   15140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:00.778972   15140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:00.780702   15140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
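
The dmesg gather trims the kernel ring buffer to warnings and worse before keeping the newest 400 lines. The flags, spelled out for the util-linux dmesg used here:

	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	# -H human-readable output, -P no pager, -L=never no color codes,
	# --level ...  keep only warn/err/crit/alert/emerg entries
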
	I1217 20:39:03.284822  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:39:03.295036  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:39:03.295097  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:39:03.333750  528764 cri.go:89] found id: ""
	I1217 20:39:03.333778  528764 logs.go:282] 0 containers: []
	W1217 20:39:03.333786  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:39:03.333792  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:39:03.333861  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:39:03.363983  528764 cri.go:89] found id: ""
	I1217 20:39:03.363997  528764 logs.go:282] 0 containers: []
	W1217 20:39:03.364004  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:39:03.364024  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:39:03.364082  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:39:03.392963  528764 cri.go:89] found id: ""
	I1217 20:39:03.392977  528764 logs.go:282] 0 containers: []
	W1217 20:39:03.392984  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:39:03.392989  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:39:03.393044  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:39:03.419023  528764 cri.go:89] found id: ""
	I1217 20:39:03.419039  528764 logs.go:282] 0 containers: []
	W1217 20:39:03.419046  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:39:03.419052  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:39:03.419108  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:39:03.444813  528764 cri.go:89] found id: ""
	I1217 20:39:03.444826  528764 logs.go:282] 0 containers: []
	W1217 20:39:03.444833  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:39:03.444838  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:39:03.444895  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:39:03.468964  528764 cri.go:89] found id: ""
	I1217 20:39:03.468978  528764 logs.go:282] 0 containers: []
	W1217 20:39:03.468986  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:39:03.468996  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:39:03.469053  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:39:03.494050  528764 cri.go:89] found id: ""
	I1217 20:39:03.494063  528764 logs.go:282] 0 containers: []
	W1217 20:39:03.494071  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:39:03.494078  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:39:03.494087  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:39:03.559830  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:39:03.559849  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:39:03.575390  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:39:03.575407  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:39:03.642132  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:39:03.634093   15231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:03.634724   15231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:03.636305   15231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:03.636854   15231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:03.638302   15231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 20:39:03.642142  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:39:03.642153  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:39:03.710317  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:39:03.710339  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
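
The kubelet and CRI-O gathers are plain systemd journal reads, capped by -n at the newest 400 entries per unit. Standalone equivalents, unit names exactly as in the log:

	sudo journalctl -u kubelet -n 400   # kubelet service log
	sudo journalctl -u crio -n 400      # container runtime log
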
	I1217 20:39:06.242034  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:39:06.252695  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:39:06.252759  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:39:06.278446  528764 cri.go:89] found id: ""
	I1217 20:39:06.278460  528764 logs.go:282] 0 containers: []
	W1217 20:39:06.278467  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:39:06.278477  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:39:06.278573  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:39:06.304597  528764 cri.go:89] found id: ""
	I1217 20:39:06.304612  528764 logs.go:282] 0 containers: []
	W1217 20:39:06.304620  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:39:06.304630  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:39:06.304702  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:39:06.345678  528764 cri.go:89] found id: ""
	I1217 20:39:06.345693  528764 logs.go:282] 0 containers: []
	W1217 20:39:06.345700  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:39:06.345706  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:39:06.345764  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:39:06.381455  528764 cri.go:89] found id: ""
	I1217 20:39:06.381469  528764 logs.go:282] 0 containers: []
	W1217 20:39:06.381476  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:39:06.381482  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:39:06.381542  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:39:06.410677  528764 cri.go:89] found id: ""
	I1217 20:39:06.410691  528764 logs.go:282] 0 containers: []
	W1217 20:39:06.410698  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:39:06.410704  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:39:06.410774  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:39:06.436535  528764 cri.go:89] found id: ""
	I1217 20:39:06.436549  528764 logs.go:282] 0 containers: []
	W1217 20:39:06.436556  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:39:06.436564  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:39:06.436621  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:39:06.467306  528764 cri.go:89] found id: ""
	I1217 20:39:06.467320  528764 logs.go:282] 0 containers: []
	W1217 20:39:06.467327  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:39:06.467335  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:39:06.467345  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:39:06.533557  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:39:06.533577  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:39:06.548883  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:39:06.548901  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:39:06.613032  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:39:06.604590   15336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:06.605314   15336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:06.606990   15336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:06.607539   15336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:06.609092   15336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 20:39:06.613048  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:39:06.613068  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:39:06.682237  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:39:06.682258  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
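
Taken together, the sequence polls on a roughly three-second cadence (20:38:48, :51, :54, ... per the timestamps) until an apiserver process appears. A comparable manual wait would look like this sketch; the interval is inferred from those timestamps, not from minikube's source:

	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	  sleep 3   # retries in the log are ~3 s apart
	done
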
	I1217 20:39:09.211382  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:39:09.221300  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:39:09.221359  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:39:09.246764  528764 cri.go:89] found id: ""
	I1217 20:39:09.246778  528764 logs.go:282] 0 containers: []
	W1217 20:39:09.246785  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:39:09.246790  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:39:09.246867  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:39:09.271248  528764 cri.go:89] found id: ""
	I1217 20:39:09.271261  528764 logs.go:282] 0 containers: []
	W1217 20:39:09.271268  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:39:09.271273  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:39:09.271343  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:39:09.296093  528764 cri.go:89] found id: ""
	I1217 20:39:09.296107  528764 logs.go:282] 0 containers: []
	W1217 20:39:09.296114  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:39:09.296120  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:39:09.296175  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:39:09.325215  528764 cri.go:89] found id: ""
	I1217 20:39:09.325230  528764 logs.go:282] 0 containers: []
	W1217 20:39:09.325236  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:39:09.325241  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:39:09.325304  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:39:09.352141  528764 cri.go:89] found id: ""
	I1217 20:39:09.352155  528764 logs.go:282] 0 containers: []
	W1217 20:39:09.352162  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:39:09.352167  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:39:09.352237  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:39:09.383006  528764 cri.go:89] found id: ""
	I1217 20:39:09.383021  528764 logs.go:282] 0 containers: []
	W1217 20:39:09.383028  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:39:09.383034  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:39:09.383113  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:39:09.414504  528764 cri.go:89] found id: ""
	I1217 20:39:09.414518  528764 logs.go:282] 0 containers: []
	W1217 20:39:09.414526  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:39:09.414534  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:39:09.414566  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:39:09.483870  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:39:09.483889  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:39:09.498851  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:39:09.498867  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:39:09.569431  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:39:09.561559   15441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:09.562122   15441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:09.563640   15441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:09.564216   15441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:09.565635   15441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 20:39:09.569442  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:39:09.569452  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:39:09.636946  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:39:09.636966  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:39:12.165906  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:39:12.176117  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:39:12.176184  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:39:12.202030  528764 cri.go:89] found id: ""
	I1217 20:39:12.202043  528764 logs.go:282] 0 containers: []
	W1217 20:39:12.202051  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:39:12.202056  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:39:12.202111  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:39:12.230473  528764 cri.go:89] found id: ""
	I1217 20:39:12.230487  528764 logs.go:282] 0 containers: []
	W1217 20:39:12.230495  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:39:12.230500  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:39:12.230559  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:39:12.256663  528764 cri.go:89] found id: ""
	I1217 20:39:12.256677  528764 logs.go:282] 0 containers: []
	W1217 20:39:12.256685  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:39:12.256690  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:39:12.256747  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:39:12.284083  528764 cri.go:89] found id: ""
	I1217 20:39:12.284096  528764 logs.go:282] 0 containers: []
	W1217 20:39:12.284104  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:39:12.284109  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:39:12.284168  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:39:12.309047  528764 cri.go:89] found id: ""
	I1217 20:39:12.309062  528764 logs.go:282] 0 containers: []
	W1217 20:39:12.309070  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:39:12.309075  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:39:12.309134  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:39:12.351942  528764 cri.go:89] found id: ""
	I1217 20:39:12.351957  528764 logs.go:282] 0 containers: []
	W1217 20:39:12.351969  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:39:12.351975  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:39:12.352034  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:39:12.390734  528764 cri.go:89] found id: ""
	I1217 20:39:12.390765  528764 logs.go:282] 0 containers: []
	W1217 20:39:12.390773  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:39:12.390782  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:39:12.390793  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:39:12.456083  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:39:12.456103  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:39:12.471218  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:39:12.471239  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:39:12.538690  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:39:12.527207   15546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:12.527702   15546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:12.529370   15546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:12.529843   15546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:12.531751   15546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 20:39:12.538707  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:39:12.538718  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:39:12.605751  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:39:12.605772  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:39:15.135835  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:39:15.146221  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:39:15.146280  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:39:15.176272  528764 cri.go:89] found id: ""
	I1217 20:39:15.176286  528764 logs.go:282] 0 containers: []
	W1217 20:39:15.176294  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:39:15.176301  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:39:15.176357  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:39:15.206452  528764 cri.go:89] found id: ""
	I1217 20:39:15.206466  528764 logs.go:282] 0 containers: []
	W1217 20:39:15.206474  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:39:15.206479  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:39:15.206548  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:39:15.231899  528764 cri.go:89] found id: ""
	I1217 20:39:15.231914  528764 logs.go:282] 0 containers: []
	W1217 20:39:15.231921  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:39:15.231927  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:39:15.231996  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:39:15.257093  528764 cri.go:89] found id: ""
	I1217 20:39:15.257106  528764 logs.go:282] 0 containers: []
	W1217 20:39:15.257113  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:39:15.257119  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:39:15.257174  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:39:15.281692  528764 cri.go:89] found id: ""
	I1217 20:39:15.281706  528764 logs.go:282] 0 containers: []
	W1217 20:39:15.281714  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:39:15.281719  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:39:15.281777  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:39:15.310093  528764 cri.go:89] found id: ""
	I1217 20:39:15.310107  528764 logs.go:282] 0 containers: []
	W1217 20:39:15.310114  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:39:15.310119  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:39:15.310193  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:39:15.349800  528764 cri.go:89] found id: ""
	I1217 20:39:15.349813  528764 logs.go:282] 0 containers: []
	W1217 20:39:15.349830  528764 logs.go:284] No container was found matching "kindnet"
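
Each probe in the sweep above is the same crictl invocation parameterized by component name; an empty result produces the `found id: ""`, `0 containers: []`, and "No container was found" lines. A minimal sketch of that sweep, assuming crictl is on the node's PATH (the component list is taken from the log):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
        }
        for _, name := range components {
            // Same probe as the log: list containers in any state whose name
            // matches, printing only their IDs. A probe error is treated the
            // same as "no containers found", matching the log's behavior.
            out, _ := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
            ids := strings.Fields(string(out))
            fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
        }
    }
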
	I1217 20:39:15.349839  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:39:15.349850  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:39:15.426883  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:39:15.426904  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:39:15.442044  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:39:15.442059  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:39:15.512531  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:39:15.503665   15651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:15.504418   15651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:15.506067   15651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:15.506537   15651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:15.508077   15651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:39:15.503665   15651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:15.504418   15651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:15.506067   15651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:15.506537   15651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:15.508077   15651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
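
Every failed "describe nodes" block reduces to the same root cause: the node-local kubeconfig points kubectl at localhost:8441, and nothing is listening there because no kube-apiserver container exists, so every request dies before API discovery with "connection refused". The refused dial can be reproduced with a minimal Go reachability check:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // The kubectl errors above boil down to this dial failing:
        // nothing is accepting TCP connections on the apiserver port.
        conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver unreachable:", err) // e.g. "connect: connection refused"
            return
        }
        conn.Close()
        fmt.Println("apiserver port is accepting connections")
    }
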
	I1217 20:39:15.512542  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:39:15.512554  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:39:15.587396  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:39:15.587422  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:39:18.121184  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:39:18.131563  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:39:18.131644  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:39:18.157091  528764 cri.go:89] found id: ""
	I1217 20:39:18.157105  528764 logs.go:282] 0 containers: []
	W1217 20:39:18.157113  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:39:18.157118  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:39:18.157177  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:39:18.183414  528764 cri.go:89] found id: ""
	I1217 20:39:18.183428  528764 logs.go:282] 0 containers: []
	W1217 20:39:18.183452  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:39:18.183457  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:39:18.183523  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:39:18.210558  528764 cri.go:89] found id: ""
	I1217 20:39:18.210586  528764 logs.go:282] 0 containers: []
	W1217 20:39:18.210595  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:39:18.210600  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:39:18.210667  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:39:18.236623  528764 cri.go:89] found id: ""
	I1217 20:39:18.236653  528764 logs.go:282] 0 containers: []
	W1217 20:39:18.236661  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:39:18.236666  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:39:18.236730  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:39:18.263889  528764 cri.go:89] found id: ""
	I1217 20:39:18.263903  528764 logs.go:282] 0 containers: []
	W1217 20:39:18.263911  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:39:18.263916  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:39:18.263977  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:39:18.289661  528764 cri.go:89] found id: ""
	I1217 20:39:18.289675  528764 logs.go:282] 0 containers: []
	W1217 20:39:18.289683  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:39:18.289688  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:39:18.289743  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:39:18.314115  528764 cri.go:89] found id: ""
	I1217 20:39:18.314129  528764 logs.go:282] 0 containers: []
	W1217 20:39:18.314136  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:39:18.314143  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:39:18.314165  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:39:18.382890  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:39:18.382909  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:39:18.425251  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:39:18.425268  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:39:18.493317  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:39:18.493336  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:39:18.509454  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:39:18.509470  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:39:18.571731  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:39:18.563250   15771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:18.563867   15771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:18.564819   15771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:18.566275   15771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:18.566732   15771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:39:18.563250   15771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:18.563867   15771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:18.564819   15771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:18.566275   15771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:18.566732   15771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:39:21.073445  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:39:21.083815  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:39:21.083874  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:39:21.113281  528764 cri.go:89] found id: ""
	I1217 20:39:21.113295  528764 logs.go:282] 0 containers: []
	W1217 20:39:21.113302  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:39:21.113307  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:39:21.113365  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:39:21.142024  528764 cri.go:89] found id: ""
	I1217 20:39:21.142039  528764 logs.go:282] 0 containers: []
	W1217 20:39:21.142046  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:39:21.142059  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:39:21.142123  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:39:21.170658  528764 cri.go:89] found id: ""
	I1217 20:39:21.170678  528764 logs.go:282] 0 containers: []
	W1217 20:39:21.170686  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:39:21.170691  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:39:21.170756  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:39:21.196194  528764 cri.go:89] found id: ""
	I1217 20:39:21.196207  528764 logs.go:282] 0 containers: []
	W1217 20:39:21.196214  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:39:21.196220  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:39:21.196277  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:39:21.222255  528764 cri.go:89] found id: ""
	I1217 20:39:21.222269  528764 logs.go:282] 0 containers: []
	W1217 20:39:21.222276  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:39:21.222282  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:39:21.222355  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:39:21.247912  528764 cri.go:89] found id: ""
	I1217 20:39:21.247926  528764 logs.go:282] 0 containers: []
	W1217 20:39:21.247933  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:39:21.247939  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:39:21.247996  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:39:21.278136  528764 cri.go:89] found id: ""
	I1217 20:39:21.278151  528764 logs.go:282] 0 containers: []
	W1217 20:39:21.278158  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:39:21.278175  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:39:21.278187  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:39:21.346881  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:39:21.346899  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:39:21.363101  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:39:21.363117  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:39:21.431000  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:39:21.421965   15864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:21.422735   15864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:21.424535   15864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:21.425238   15864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:21.426896   15864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:39:21.421965   15864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:21.422735   15864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:21.424535   15864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:21.425238   15864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:21.426896   15864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:39:21.431011  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:39:21.431024  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:39:21.499494  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:39:21.499512  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:39:24.028859  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:39:24.039467  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:39:24.039528  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:39:24.065108  528764 cri.go:89] found id: ""
	I1217 20:39:24.065122  528764 logs.go:282] 0 containers: []
	W1217 20:39:24.065130  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:39:24.065135  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:39:24.065193  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:39:24.090624  528764 cri.go:89] found id: ""
	I1217 20:39:24.090638  528764 logs.go:282] 0 containers: []
	W1217 20:39:24.090647  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:39:24.090652  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:39:24.090710  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:39:24.116315  528764 cri.go:89] found id: ""
	I1217 20:39:24.116331  528764 logs.go:282] 0 containers: []
	W1217 20:39:24.116339  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:39:24.116345  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:39:24.116414  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:39:24.141792  528764 cri.go:89] found id: ""
	I1217 20:39:24.141806  528764 logs.go:282] 0 containers: []
	W1217 20:39:24.141813  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:39:24.141818  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:39:24.141877  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:39:24.170297  528764 cri.go:89] found id: ""
	I1217 20:39:24.170310  528764 logs.go:282] 0 containers: []
	W1217 20:39:24.170318  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:39:24.170324  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:39:24.170378  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:39:24.199383  528764 cri.go:89] found id: ""
	I1217 20:39:24.199397  528764 logs.go:282] 0 containers: []
	W1217 20:39:24.199404  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:39:24.199411  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:39:24.199477  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:39:24.224443  528764 cri.go:89] found id: ""
	I1217 20:39:24.224457  528764 logs.go:282] 0 containers: []
	W1217 20:39:24.224464  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:39:24.224471  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:39:24.224496  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:39:24.253379  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:39:24.253396  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:39:24.322404  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:39:24.322423  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:39:24.340551  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:39:24.340569  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:39:24.409290  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:39:24.400697   15977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:24.401399   15977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:24.402586   15977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:24.403250   15977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:24.404963   15977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:39:24.400697   15977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:24.401399   15977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:24.402586   15977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:24.403250   15977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:24.404963   15977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:39:24.409305  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:39:24.409316  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:39:26.976820  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:39:26.986804  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:39:26.986885  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:39:27.015438  528764 cri.go:89] found id: ""
	I1217 20:39:27.015453  528764 logs.go:282] 0 containers: []
	W1217 20:39:27.015460  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:39:27.015466  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:39:27.015545  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:39:27.041591  528764 cri.go:89] found id: ""
	I1217 20:39:27.041605  528764 logs.go:282] 0 containers: []
	W1217 20:39:27.041613  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:39:27.041619  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:39:27.041680  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:39:27.066798  528764 cri.go:89] found id: ""
	I1217 20:39:27.066812  528764 logs.go:282] 0 containers: []
	W1217 20:39:27.066819  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:39:27.066851  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:39:27.066908  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:39:27.091716  528764 cri.go:89] found id: ""
	I1217 20:39:27.091730  528764 logs.go:282] 0 containers: []
	W1217 20:39:27.091737  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:39:27.091743  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:39:27.091797  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:39:27.116523  528764 cri.go:89] found id: ""
	I1217 20:39:27.116536  528764 logs.go:282] 0 containers: []
	W1217 20:39:27.116544  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:39:27.116550  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:39:27.116612  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:39:27.140982  528764 cri.go:89] found id: ""
	I1217 20:39:27.140996  528764 logs.go:282] 0 containers: []
	W1217 20:39:27.141004  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:39:27.141009  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:39:27.141064  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:39:27.170754  528764 cri.go:89] found id: ""
	I1217 20:39:27.170769  528764 logs.go:282] 0 containers: []
	W1217 20:39:27.170777  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:39:27.170784  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:39:27.170805  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:39:27.234403  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:39:27.226295   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:27.226681   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:27.228247   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:27.228832   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:27.230386   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:39:27.226295   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:27.226681   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:27.228247   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:27.228832   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:27.230386   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:39:27.234413  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:39:27.234463  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:39:27.306551  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:39:27.306570  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:39:27.342575  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:39:27.342597  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:39:27.416305  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:39:27.416325  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:39:29.931568  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:39:29.941696  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:39:29.941790  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:39:29.970561  528764 cri.go:89] found id: ""
	I1217 20:39:29.970576  528764 logs.go:282] 0 containers: []
	W1217 20:39:29.970583  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:39:29.970588  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:39:29.970644  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:39:29.995538  528764 cri.go:89] found id: ""
	I1217 20:39:29.995551  528764 logs.go:282] 0 containers: []
	W1217 20:39:29.995559  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:39:29.995564  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:39:29.995645  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:39:30.047472  528764 cri.go:89] found id: ""
	I1217 20:39:30.047487  528764 logs.go:282] 0 containers: []
	W1217 20:39:30.047496  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:39:30.047501  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:39:30.047568  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:39:30.077580  528764 cri.go:89] found id: ""
	I1217 20:39:30.077595  528764 logs.go:282] 0 containers: []
	W1217 20:39:30.077603  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:39:30.077609  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:39:30.077686  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:39:30.111544  528764 cri.go:89] found id: ""
	I1217 20:39:30.111574  528764 logs.go:282] 0 containers: []
	W1217 20:39:30.111618  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:39:30.111624  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:39:30.111705  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:39:30.139478  528764 cri.go:89] found id: ""
	I1217 20:39:30.139504  528764 logs.go:282] 0 containers: []
	W1217 20:39:30.139513  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:39:30.139518  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:39:30.139611  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:39:30.169107  528764 cri.go:89] found id: ""
	I1217 20:39:30.169121  528764 logs.go:282] 0 containers: []
	W1217 20:39:30.169128  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:39:30.169136  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:39:30.169146  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:39:30.234963  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:39:30.234982  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:39:30.250550  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:39:30.250577  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:39:30.320870  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:39:30.310827   16172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:30.311705   16172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:30.313455   16172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:30.314025   16172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:30.315795   16172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:39:30.310827   16172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:30.311705   16172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:30.313455   16172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:30.314025   16172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:30.315795   16172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:39:30.320884  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:39:30.320894  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:39:30.397776  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:39:30.397796  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:39:32.932751  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:39:32.942813  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:39:32.942885  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:39:32.968405  528764 cri.go:89] found id: ""
	I1217 20:39:32.968418  528764 logs.go:282] 0 containers: []
	W1217 20:39:32.968425  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:39:32.968431  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:39:32.968503  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:39:32.991973  528764 cri.go:89] found id: ""
	I1217 20:39:32.991987  528764 logs.go:282] 0 containers: []
	W1217 20:39:32.991994  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:39:32.992005  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:39:32.992063  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:39:33.019478  528764 cri.go:89] found id: ""
	I1217 20:39:33.019492  528764 logs.go:282] 0 containers: []
	W1217 20:39:33.019500  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:39:33.019505  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:39:33.019572  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:39:33.044942  528764 cri.go:89] found id: ""
	I1217 20:39:33.044958  528764 logs.go:282] 0 containers: []
	W1217 20:39:33.044965  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:39:33.044970  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:39:33.045028  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:39:33.072242  528764 cri.go:89] found id: ""
	I1217 20:39:33.072256  528764 logs.go:282] 0 containers: []
	W1217 20:39:33.072263  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:39:33.072268  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:39:33.072332  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:39:33.101598  528764 cri.go:89] found id: ""
	I1217 20:39:33.101611  528764 logs.go:282] 0 containers: []
	W1217 20:39:33.101619  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:39:33.101624  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:39:33.101677  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:39:33.127765  528764 cri.go:89] found id: ""
	I1217 20:39:33.127780  528764 logs.go:282] 0 containers: []
	W1217 20:39:33.127805  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:39:33.127813  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:39:33.127830  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:39:33.193505  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:39:33.193524  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:39:33.209404  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:39:33.209419  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:39:33.278213  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:39:33.269512   16277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:33.270341   16277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:33.272086   16277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:33.272605   16277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:33.274151   16277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:39:33.269512   16277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:33.270341   16277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:33.272086   16277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:33.272605   16277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:33.274151   16277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:39:33.278224  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:39:33.278234  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:39:33.352890  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:39:33.352911  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:39:35.892717  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:39:35.902865  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:39:35.902923  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:39:35.927963  528764 cri.go:89] found id: ""
	I1217 20:39:35.927977  528764 logs.go:282] 0 containers: []
	W1217 20:39:35.927985  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:39:35.927990  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:39:35.928047  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:39:35.953995  528764 cri.go:89] found id: ""
	I1217 20:39:35.954010  528764 logs.go:282] 0 containers: []
	W1217 20:39:35.954017  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:39:35.954022  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:39:35.954078  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:39:35.978944  528764 cri.go:89] found id: ""
	I1217 20:39:35.978958  528764 logs.go:282] 0 containers: []
	W1217 20:39:35.978965  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:39:35.978971  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:39:35.979027  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:39:36.009908  528764 cri.go:89] found id: ""
	I1217 20:39:36.009923  528764 logs.go:282] 0 containers: []
	W1217 20:39:36.009932  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:39:36.009938  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:39:36.010005  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:39:36.036093  528764 cri.go:89] found id: ""
	I1217 20:39:36.036106  528764 logs.go:282] 0 containers: []
	W1217 20:39:36.036114  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:39:36.036125  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:39:36.036189  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:39:36.064858  528764 cri.go:89] found id: ""
	I1217 20:39:36.064873  528764 logs.go:282] 0 containers: []
	W1217 20:39:36.064880  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:39:36.064888  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:39:36.064943  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:39:36.091213  528764 cri.go:89] found id: ""
	I1217 20:39:36.091228  528764 logs.go:282] 0 containers: []
	W1217 20:39:36.091236  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:39:36.091243  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:39:36.091265  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:39:36.123131  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:39:36.123147  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:39:36.192190  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:39:36.192209  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:39:36.207423  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:39:36.207441  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:39:36.274672  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:39:36.265622   16394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:36.266359   16394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:36.267351   16394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:36.268947   16394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:36.269621   16394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:39:36.265622   16394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:36.266359   16394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:36.267351   16394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:36.268947   16394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:36.269621   16394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:39:36.274682  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:39:36.274693  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:39:38.848137  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:39:38.858186  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:39:38.858245  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:39:38.887476  528764 cri.go:89] found id: ""
	I1217 20:39:38.887491  528764 logs.go:282] 0 containers: []
	W1217 20:39:38.887498  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:39:38.887503  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:39:38.887559  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:39:38.913669  528764 cri.go:89] found id: ""
	I1217 20:39:38.913683  528764 logs.go:282] 0 containers: []
	W1217 20:39:38.913691  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:39:38.913696  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:39:38.913753  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:39:38.938922  528764 cri.go:89] found id: ""
	I1217 20:39:38.938937  528764 logs.go:282] 0 containers: []
	W1217 20:39:38.938945  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:39:38.938950  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:39:38.939010  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:39:38.964782  528764 cri.go:89] found id: ""
	I1217 20:39:38.964796  528764 logs.go:282] 0 containers: []
	W1217 20:39:38.964804  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:39:38.964809  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:39:38.964869  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:39:38.990990  528764 cri.go:89] found id: ""
	I1217 20:39:38.991004  528764 logs.go:282] 0 containers: []
	W1217 20:39:38.991012  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:39:38.991017  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:39:38.991087  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:39:39.019624  528764 cri.go:89] found id: ""
	I1217 20:39:39.019638  528764 logs.go:282] 0 containers: []
	W1217 20:39:39.019645  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:39:39.019651  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:39:39.019712  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:39:39.049943  528764 cri.go:89] found id: ""
	I1217 20:39:39.049957  528764 logs.go:282] 0 containers: []
	W1217 20:39:39.049964  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:39:39.049971  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:39:39.049982  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:39:39.114679  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:39:39.114699  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:39:39.129526  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:39:39.129544  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:39:39.192131  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:39:39.184273   16490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:39.185000   16490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:39.186617   16490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:39.186938   16490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:39.188434   16490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:39:39.184273   16490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:39.185000   16490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:39.186617   16490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:39.186938   16490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:39.188434   16490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:39:39.192141  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:39:39.192151  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:39:39.262829  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:39:39.262849  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
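
Each cycle above has the same shape: for every expected control-plane component, minikube lists matching CRI containers and warns when none are found, then gathers kubelet, dmesg, describe-nodes, CRI-O, and container-status output. A minimal sketch of that listing step, assuming crictl is on the PATH; this is an illustration of the pattern, not minikube's actual cri.go/logs.go code:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers runs `crictl ps -a --quiet --name=<name>` and returns the
// container IDs it prints, one per line. Empty output means no matches.
func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(strings.TrimSpace(string(out))), nil
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"}
	for _, c := range components {
		ids, err := listContainers(c)
		if err != nil {
			fmt.Printf("error listing %q: %v\n", c, err)
			continue
		}
		if len(ids) == 0 {
			// Mirrors the repeated `No container was found matching ...` warnings above.
			fmt.Printf("no container was found matching %q\n", c)
		}
	}
}

In this run every component comes back empty, which is why the log falls through to the journalctl/dmesg gathering on each pass.
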
	I1217 20:39:41.796129  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:39:41.805988  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:39:41.806050  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:39:41.830659  528764 cri.go:89] found id: ""
	I1217 20:39:41.830688  528764 logs.go:282] 0 containers: []
	W1217 20:39:41.830696  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:39:41.830702  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:39:41.830772  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:39:41.855846  528764 cri.go:89] found id: ""
	I1217 20:39:41.855861  528764 logs.go:282] 0 containers: []
	W1217 20:39:41.855868  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:39:41.855874  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:39:41.855937  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:39:41.880126  528764 cri.go:89] found id: ""
	I1217 20:39:41.880139  528764 logs.go:282] 0 containers: []
	W1217 20:39:41.880147  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:39:41.880151  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:39:41.880205  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:39:41.909006  528764 cri.go:89] found id: ""
	I1217 20:39:41.909020  528764 logs.go:282] 0 containers: []
	W1217 20:39:41.909027  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:39:41.909032  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:39:41.909088  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:39:41.938559  528764 cri.go:89] found id: ""
	I1217 20:39:41.938573  528764 logs.go:282] 0 containers: []
	W1217 20:39:41.938580  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:39:41.938585  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:39:41.938646  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:39:41.966291  528764 cri.go:89] found id: ""
	I1217 20:39:41.966305  528764 logs.go:282] 0 containers: []
	W1217 20:39:41.966312  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:39:41.966317  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:39:41.966380  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:39:41.991150  528764 cri.go:89] found id: ""
	I1217 20:39:41.991164  528764 logs.go:282] 0 containers: []
	W1217 20:39:41.991172  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:39:41.991180  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:39:41.991190  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:39:42.024918  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:39:42.024936  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:39:42.094047  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:39:42.094069  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:39:42.113717  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:39:42.113737  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:39:42.191163  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:39:42.180141   16604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:42.180682   16604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:42.183783   16604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:42.184295   16604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:42.186218   16604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:39:42.180141   16604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:42.180682   16604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:42.183783   16604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:42.184295   16604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:42.186218   16604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:39:42.191176  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:39:42.191195  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:39:44.772767  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:39:44.783138  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:39:44.783204  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:39:44.811282  528764 cri.go:89] found id: ""
	I1217 20:39:44.811296  528764 logs.go:282] 0 containers: []
	W1217 20:39:44.811304  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:39:44.811309  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:39:44.811369  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:39:44.838690  528764 cri.go:89] found id: ""
	I1217 20:39:44.838704  528764 logs.go:282] 0 containers: []
	W1217 20:39:44.838711  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:39:44.838717  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:39:44.838776  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:39:44.866668  528764 cri.go:89] found id: ""
	I1217 20:39:44.866683  528764 logs.go:282] 0 containers: []
	W1217 20:39:44.866690  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:39:44.866696  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:39:44.866751  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:39:44.892383  528764 cri.go:89] found id: ""
	I1217 20:39:44.892397  528764 logs.go:282] 0 containers: []
	W1217 20:39:44.892405  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:39:44.892410  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:39:44.892468  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:39:44.921797  528764 cri.go:89] found id: ""
	I1217 20:39:44.921812  528764 logs.go:282] 0 containers: []
	W1217 20:39:44.921819  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:39:44.921825  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:39:44.921885  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:39:44.947362  528764 cri.go:89] found id: ""
	I1217 20:39:44.947376  528764 logs.go:282] 0 containers: []
	W1217 20:39:44.947384  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:39:44.947389  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:39:44.947446  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:39:44.974284  528764 cri.go:89] found id: ""
	I1217 20:39:44.974297  528764 logs.go:282] 0 containers: []
	W1217 20:39:44.974305  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:39:44.974312  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:39:44.974323  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:39:45.077487  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:39:45.067021   16692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:45.068124   16692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:45.068971   16692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:45.071090   16692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:45.072380   16692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:39:45.067021   16692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:45.068124   16692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:45.068971   16692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:45.071090   16692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:45.072380   16692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:39:45.077499  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:39:45.077511  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:39:45.185472  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:39:45.185499  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:39:45.244734  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:39:45.244753  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:39:45.320383  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:39:45.320403  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:39:47.839254  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:39:47.849450  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:39:47.849509  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:39:47.878517  528764 cri.go:89] found id: ""
	I1217 20:39:47.878531  528764 logs.go:282] 0 containers: []
	W1217 20:39:47.878539  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:39:47.878554  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:39:47.878612  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:39:47.904739  528764 cri.go:89] found id: ""
	I1217 20:39:47.904754  528764 logs.go:282] 0 containers: []
	W1217 20:39:47.904762  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:39:47.904767  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:39:47.904823  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:39:47.929572  528764 cri.go:89] found id: ""
	I1217 20:39:47.929586  528764 logs.go:282] 0 containers: []
	W1217 20:39:47.929593  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:39:47.929599  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:39:47.929658  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:39:47.958617  528764 cri.go:89] found id: ""
	I1217 20:39:47.958631  528764 logs.go:282] 0 containers: []
	W1217 20:39:47.958639  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:39:47.958644  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:39:47.958701  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:39:47.984420  528764 cri.go:89] found id: ""
	I1217 20:39:47.984434  528764 logs.go:282] 0 containers: []
	W1217 20:39:47.984441  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:39:47.984447  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:39:47.984504  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:39:48.013373  528764 cri.go:89] found id: ""
	I1217 20:39:48.013389  528764 logs.go:282] 0 containers: []
	W1217 20:39:48.013396  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:39:48.013402  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:39:48.013461  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:39:48.040700  528764 cri.go:89] found id: ""
	I1217 20:39:48.040713  528764 logs.go:282] 0 containers: []
	W1217 20:39:48.040720  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:39:48.040728  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:39:48.040740  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:39:48.112503  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:39:48.112522  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:39:48.148498  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:39:48.148514  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:39:48.215575  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:39:48.215644  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:39:48.230769  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:39:48.230785  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:39:48.305622  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:39:48.297446   16820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:48.298244   16820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:48.299754   16820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:48.300303   16820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:48.301821   16820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:39:48.297446   16820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:48.298244   16820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:48.299754   16820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:48.300303   16820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:48.301821   16820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
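
Every `kubectl describe nodes` attempt above dies the same way: the kubeconfig points the client at localhost:8441, no apiserver is listening, and each of kubectl's five discovery retries reports connection refused before it gives up. A quick reachability probe makes the failure mode concrete (a sketch, not minikube's own health check; the address is taken from the errors above):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// With no kube-apiserver container running, the TCP connect itself
	// is refused, matching the `dial tcp [::1]:8441` errors in the log.
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}
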
	I1217 20:39:50.807281  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:39:50.819012  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:39:50.819075  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:39:50.845131  528764 cri.go:89] found id: ""
	I1217 20:39:50.845145  528764 logs.go:282] 0 containers: []
	W1217 20:39:50.845153  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:39:50.845158  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:39:50.845215  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:39:50.878758  528764 cri.go:89] found id: ""
	I1217 20:39:50.878771  528764 logs.go:282] 0 containers: []
	W1217 20:39:50.878778  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:39:50.878783  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:39:50.878851  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:39:50.905139  528764 cri.go:89] found id: ""
	I1217 20:39:50.905154  528764 logs.go:282] 0 containers: []
	W1217 20:39:50.905161  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:39:50.905167  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:39:50.905234  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:39:50.930885  528764 cri.go:89] found id: ""
	I1217 20:39:50.930898  528764 logs.go:282] 0 containers: []
	W1217 20:39:50.930923  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:39:50.930928  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:39:50.931004  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:39:50.961249  528764 cri.go:89] found id: ""
	I1217 20:39:50.961264  528764 logs.go:282] 0 containers: []
	W1217 20:39:50.961271  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:39:50.961281  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:39:50.961339  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:39:50.990268  528764 cri.go:89] found id: ""
	I1217 20:39:50.990283  528764 logs.go:282] 0 containers: []
	W1217 20:39:50.990290  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:39:50.990305  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:39:50.990368  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:39:51.022220  528764 cri.go:89] found id: ""
	I1217 20:39:51.022235  528764 logs.go:282] 0 containers: []
	W1217 20:39:51.022253  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:39:51.022260  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:39:51.022272  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:39:51.037279  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:39:51.037301  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:39:51.104091  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:39:51.095357   16908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:51.096330   16908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:51.098113   16908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:51.098463   16908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:51.100108   16908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:39:51.095357   16908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:51.096330   16908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:51.098113   16908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:51.098463   16908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:51.100108   16908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:39:51.104101  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:39:51.104112  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:39:51.170651  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:39:51.170674  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:39:51.200399  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:39:51.200421  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:39:53.770767  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:39:53.780793  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:39:53.780851  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:39:53.809348  528764 cri.go:89] found id: ""
	I1217 20:39:53.809362  528764 logs.go:282] 0 containers: []
	W1217 20:39:53.809370  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:39:53.809375  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:39:53.809441  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:39:53.834689  528764 cri.go:89] found id: ""
	I1217 20:39:53.834703  528764 logs.go:282] 0 containers: []
	W1217 20:39:53.834710  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:39:53.834716  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:39:53.834772  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:39:53.861465  528764 cri.go:89] found id: ""
	I1217 20:39:53.861483  528764 logs.go:282] 0 containers: []
	W1217 20:39:53.861491  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:39:53.861498  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:39:53.861562  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:39:53.891732  528764 cri.go:89] found id: ""
	I1217 20:39:53.891747  528764 logs.go:282] 0 containers: []
	W1217 20:39:53.891754  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:39:53.891759  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:39:53.891817  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:39:53.917938  528764 cri.go:89] found id: ""
	I1217 20:39:53.917952  528764 logs.go:282] 0 containers: []
	W1217 20:39:53.917959  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:39:53.917964  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:39:53.918024  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:39:53.943397  528764 cri.go:89] found id: ""
	I1217 20:39:53.943412  528764 logs.go:282] 0 containers: []
	W1217 20:39:53.943420  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:39:53.943431  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:39:53.943500  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:39:53.970499  528764 cri.go:89] found id: ""
	I1217 20:39:53.970514  528764 logs.go:282] 0 containers: []
	W1217 20:39:53.970521  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:39:53.970529  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:39:53.970540  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:39:54.037615  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:39:54.028803   17010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:54.030066   17010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:54.030771   17010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:54.032113   17010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:54.032669   17010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:39:54.028803   17010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:54.030066   17010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:54.030771   17010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:54.032113   17010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:54.032669   17010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:39:54.037625  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:39:54.037637  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:39:54.105683  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:39:54.105702  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:39:54.135408  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:39:54.135424  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:39:54.201915  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:39:54.201934  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
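
The `pgrep -xnf kube-apiserver.*minikube.*` probes land roughly three seconds apart (20:39:38, :41, :44, :47, :50, :53, :56, ...): minikube is polling for an apiserver process and re-running the full diagnostic pass each time it is absent. A stripped-down version of that wait loop, with an assumed interval and overall deadline (illustrative values, not minikube's actual tuning):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning reports whether pgrep finds a kube-apiserver process;
// pgrep exits non-zero when nothing matches.
func apiserverRunning() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	deadline := time.Now().Add(2 * time.Minute) // assumed overall timeout
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			fmt.Println("kube-apiserver is up")
			return
		}
		// Between probes the real code gathers kubelet/dmesg/CRI-O logs;
		// here we just wait out the ~3s interval seen in the timestamps.
		time.Sleep(3 * time.Second)
	}
	fmt.Println("timed out waiting for kube-apiserver")
}
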
	I1217 20:39:56.717571  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:39:56.727576  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:39:56.727663  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:39:56.752566  528764 cri.go:89] found id: ""
	I1217 20:39:56.752580  528764 logs.go:282] 0 containers: []
	W1217 20:39:56.752587  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:39:56.752593  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:39:56.752649  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:39:56.778100  528764 cri.go:89] found id: ""
	I1217 20:39:56.778114  528764 logs.go:282] 0 containers: []
	W1217 20:39:56.778123  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:39:56.778128  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:39:56.778188  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:39:56.810564  528764 cri.go:89] found id: ""
	I1217 20:39:56.810578  528764 logs.go:282] 0 containers: []
	W1217 20:39:56.810585  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:39:56.810590  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:39:56.810651  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:39:56.836110  528764 cri.go:89] found id: ""
	I1217 20:39:56.836123  528764 logs.go:282] 0 containers: []
	W1217 20:39:56.836130  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:39:56.836136  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:39:56.836192  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:39:56.860819  528764 cri.go:89] found id: ""
	I1217 20:39:56.860833  528764 logs.go:282] 0 containers: []
	W1217 20:39:56.860840  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:39:56.860845  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:39:56.860910  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:39:56.885378  528764 cri.go:89] found id: ""
	I1217 20:39:56.885392  528764 logs.go:282] 0 containers: []
	W1217 20:39:56.885400  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:39:56.885405  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:39:56.885464  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:39:56.910636  528764 cri.go:89] found id: ""
	I1217 20:39:56.910649  528764 logs.go:282] 0 containers: []
	W1217 20:39:56.910657  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:39:56.910664  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:39:56.910685  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:39:56.975973  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:39:56.975994  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:39:56.990897  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:39:56.990913  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:39:57.059420  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:39:57.050613   17117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:57.051527   17117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:57.053283   17117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:57.053833   17117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:57.055383   17117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:39:57.050613   17117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:57.051527   17117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:57.053283   17117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:57.053833   17117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:57.055383   17117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:39:57.059434  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:39:57.059444  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:39:57.127559  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:39:57.127588  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:39:59.660834  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:39:59.671347  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:39:59.671409  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:39:59.697317  528764 cri.go:89] found id: ""
	I1217 20:39:59.697331  528764 logs.go:282] 0 containers: []
	W1217 20:39:59.697338  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:39:59.697344  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:39:59.697400  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:39:59.721571  528764 cri.go:89] found id: ""
	I1217 20:39:59.721586  528764 logs.go:282] 0 containers: []
	W1217 20:39:59.721593  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:39:59.721601  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:39:59.721663  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:39:59.746819  528764 cri.go:89] found id: ""
	I1217 20:39:59.746835  528764 logs.go:282] 0 containers: []
	W1217 20:39:59.746843  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:39:59.746849  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:39:59.746909  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:39:59.773034  528764 cri.go:89] found id: ""
	I1217 20:39:59.773049  528764 logs.go:282] 0 containers: []
	W1217 20:39:59.773057  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:39:59.773062  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:39:59.773123  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:39:59.802418  528764 cri.go:89] found id: ""
	I1217 20:39:59.802441  528764 logs.go:282] 0 containers: []
	W1217 20:39:59.802449  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:39:59.802454  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:39:59.802524  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:39:59.831711  528764 cri.go:89] found id: ""
	I1217 20:39:59.831725  528764 logs.go:282] 0 containers: []
	W1217 20:39:59.831733  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:39:59.831739  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:39:59.831804  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:39:59.856953  528764 cri.go:89] found id: ""
	I1217 20:39:59.856967  528764 logs.go:282] 0 containers: []
	W1217 20:39:59.856975  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:39:59.856982  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:39:59.856995  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:39:59.884897  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:39:59.884914  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:39:59.949655  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:39:59.949677  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:39:59.964501  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:39:59.964517  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:40:00.094107  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:40:00.057057   17231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:00.058079   17231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:00.081049   17231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:00.085379   17231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:00.086028   17231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:40:00.057057   17231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:00.058079   17231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:00.081049   17231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:00.085379   17231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:00.086028   17231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:40:00.094120  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:40:00.094132  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:40:02.787739  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:40:02.797830  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:40:02.797894  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:40:02.834082  528764 cri.go:89] found id: ""
	I1217 20:40:02.834096  528764 logs.go:282] 0 containers: []
	W1217 20:40:02.834104  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:40:02.834109  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:40:02.834168  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:40:02.866743  528764 cri.go:89] found id: ""
	I1217 20:40:02.866756  528764 logs.go:282] 0 containers: []
	W1217 20:40:02.866763  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:40:02.866768  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:40:02.866837  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:40:02.895045  528764 cri.go:89] found id: ""
	I1217 20:40:02.895058  528764 logs.go:282] 0 containers: []
	W1217 20:40:02.895066  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:40:02.895071  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:40:02.895126  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:40:02.921557  528764 cri.go:89] found id: ""
	I1217 20:40:02.921570  528764 logs.go:282] 0 containers: []
	W1217 20:40:02.921580  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:40:02.921585  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:40:02.921641  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:40:02.952647  528764 cri.go:89] found id: ""
	I1217 20:40:02.952661  528764 logs.go:282] 0 containers: []
	W1217 20:40:02.952669  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:40:02.952675  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:40:02.952733  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:40:02.983298  528764 cri.go:89] found id: ""
	I1217 20:40:02.983312  528764 logs.go:282] 0 containers: []
	W1217 20:40:02.983319  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:40:02.983325  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:40:02.983389  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:40:03.010550  528764 cri.go:89] found id: ""
	I1217 20:40:03.010565  528764 logs.go:282] 0 containers: []
	W1217 20:40:03.010573  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:40:03.010581  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:40:03.010592  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:40:03.079310  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:40:03.079329  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:40:03.094479  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:40:03.094497  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:40:03.161221  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:40:03.151895   17325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:03.152622   17325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:03.154348   17325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:03.155044   17325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:03.156824   17325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:40:03.151895   17325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:03.152622   17325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:03.154348   17325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:03.155044   17325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:03.156824   17325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:40:03.161231  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:40:03.161242  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:40:03.227816  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:40:03.227835  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
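The block above is one complete diagnostic pass: minikube probes each expected control-plane container through crictl, then gathers kubelet, dmesg, describe-nodes, CRI-O, and container-status logs, and repeats the pass every few seconds until the API server answers on 8441. The per-component probe reduces to roughly the following loop (a sketch inferred from the Run lines above, not minikube's actual source):

    # Probe each expected control-plane container; empty output means not found.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet; do
      sudo crictl ps -a --quiet --name="$c"
    done

Every probe in this run returns an empty ID list, which is why describe-nodes fails with connection refused: there is no kube-apiserver container behind localhost:8441.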
	I1217 20:40:05.757487  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:40:05.767711  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:40:05.767773  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:40:05.793946  528764 cri.go:89] found id: ""
	I1217 20:40:05.793960  528764 logs.go:282] 0 containers: []
	W1217 20:40:05.793972  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:40:05.793978  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:40:05.794036  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:40:05.822285  528764 cri.go:89] found id: ""
	I1217 20:40:05.822299  528764 logs.go:282] 0 containers: []
	W1217 20:40:05.822306  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:40:05.822314  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:40:05.822371  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:40:05.850250  528764 cri.go:89] found id: ""
	I1217 20:40:05.850264  528764 logs.go:282] 0 containers: []
	W1217 20:40:05.850271  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:40:05.850277  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:40:05.850335  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:40:05.895396  528764 cri.go:89] found id: ""
	I1217 20:40:05.895410  528764 logs.go:282] 0 containers: []
	W1217 20:40:05.895417  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:40:05.895422  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:40:05.895477  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:40:05.922557  528764 cri.go:89] found id: ""
	I1217 20:40:05.922571  528764 logs.go:282] 0 containers: []
	W1217 20:40:05.922580  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:40:05.922586  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:40:05.922644  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:40:05.948573  528764 cri.go:89] found id: ""
	I1217 20:40:05.948586  528764 logs.go:282] 0 containers: []
	W1217 20:40:05.948594  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:40:05.948599  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:40:05.948655  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:40:05.975477  528764 cri.go:89] found id: ""
	I1217 20:40:05.975492  528764 logs.go:282] 0 containers: []
	W1217 20:40:05.975499  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:40:05.975507  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:40:05.975518  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:40:06.041819  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:40:06.041840  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:40:06.056861  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:40:06.056877  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:40:06.121776  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:40:06.113002   17431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:06.113953   17431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:06.115560   17431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:06.115989   17431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:06.117538   17431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:40:06.113002   17431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:06.113953   17431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:06.115560   17431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:06.115989   17431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:06.117538   17431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:40:06.121787  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:40:06.121799  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:40:06.189149  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:40:06.189168  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:40:08.726723  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:40:08.736543  528764 kubeadm.go:602] duration metric: took 4m2.922502769s to restartPrimaryControlPlane
	W1217 20:40:08.736595  528764 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1217 20:40:08.736673  528764 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1217 20:40:09.144455  528764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 20:40:09.157270  528764 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 20:40:09.165045  528764 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 20:40:09.165097  528764 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 20:40:09.172944  528764 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 20:40:09.172955  528764 kubeadm.go:158] found existing configuration files:
	
	I1217 20:40:09.173008  528764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1217 20:40:09.180768  528764 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 20:40:09.180823  528764 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 20:40:09.188593  528764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1217 20:40:09.196627  528764 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 20:40:09.196696  528764 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 20:40:09.204027  528764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1217 20:40:09.211590  528764 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 20:40:09.211645  528764 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 20:40:09.219300  528764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1217 20:40:09.227194  528764 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 20:40:09.227262  528764 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
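The grep/rm sequence above is minikube's stale-kubeconfig cleanup: any file under /etc/kubernetes that does not reference the expected control-plane endpoint is removed before kubeadm init runs. In this run every grep exits with status 2 because kubeadm reset already deleted the files, so the rm calls are no-ops. The sequence is equivalent to this sketch:

    # Remove kubeconfigs that do not point at the expected endpoint.
    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "https://control-plane.minikube.internal:8441" \
        "/etc/kubernetes/$f.conf" || sudo rm -f "/etc/kubernetes/$f.conf"
    done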
	I1217 20:40:09.234747  528764 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 20:40:09.272070  528764 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1217 20:40:09.272212  528764 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 20:40:09.341132  528764 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 20:40:09.341223  528764 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1217 20:40:09.341264  528764 kubeadm.go:319] OS: Linux
	I1217 20:40:09.341317  528764 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 20:40:09.341383  528764 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 20:40:09.341441  528764 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 20:40:09.341494  528764 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 20:40:09.341544  528764 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 20:40:09.341595  528764 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 20:40:09.341642  528764 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 20:40:09.341697  528764 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 20:40:09.341746  528764 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 20:40:09.410099  528764 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 20:40:09.410202  528764 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 20:40:09.410291  528764 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 20:40:09.420776  528764 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 20:40:09.424281  528764 out.go:252]   - Generating certificates and keys ...
	I1217 20:40:09.424384  528764 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 20:40:09.424470  528764 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 20:40:09.424574  528764 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1217 20:40:09.424647  528764 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1217 20:40:09.424730  528764 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1217 20:40:09.424800  528764 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1217 20:40:09.424875  528764 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1217 20:40:09.424947  528764 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1217 20:40:09.425042  528764 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1217 20:40:09.425124  528764 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1217 20:40:09.425164  528764 kubeadm.go:319] [certs] Using the existing "sa" key
	I1217 20:40:09.425224  528764 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 20:40:09.510914  528764 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 20:40:09.769116  528764 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 20:40:10.300117  528764 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 20:40:10.525653  528764 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 20:40:10.613609  528764 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 20:40:10.614221  528764 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 20:40:10.616799  528764 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 20:40:10.619993  528764 out.go:252]   - Booting up control plane ...
	I1217 20:40:10.620096  528764 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 20:40:10.620217  528764 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 20:40:10.620290  528764 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 20:40:10.635322  528764 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 20:40:10.635439  528764 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 20:40:10.644820  528764 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 20:40:10.645930  528764 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 20:40:10.645984  528764 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 20:40:10.779996  528764 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 20:40:10.780110  528764 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 20:44:10.781176  528764 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001248714s
	I1217 20:44:10.781203  528764 kubeadm.go:319] 
	I1217 20:44:10.781260  528764 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 20:44:10.781303  528764 kubeadm.go:319] 	- The kubelet is not running
	I1217 20:44:10.781406  528764 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 20:44:10.781411  528764 kubeadm.go:319] 
	I1217 20:44:10.781555  528764 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 20:44:10.781602  528764 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 20:44:10.781633  528764 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 20:44:10.781637  528764 kubeadm.go:319] 
	I1217 20:44:10.786300  528764 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1217 20:44:10.786712  528764 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 20:44:10.786818  528764 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 20:44:10.787052  528764 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1217 20:44:10.787056  528764 kubeadm.go:319] 
	I1217 20:44:10.787124  528764 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
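The kubelet-check failure above means kubeadm polled the kubelet's local health endpoint for the full 4m0s window without a single healthy response. The probe and the triage steps named in the error text amount to (run on the node):

    curl -sSL http://127.0.0.1:10248/healthz   # kubeadm's probe; a healthy kubelet prints "ok"
    systemctl status kubelet                   # is the unit active at all?
    journalctl -xeu kubelet                    # why it exited or never came up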
	W1217 20:44:10.787237  528764 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001248714s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
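Of the three preflight warnings in the stderr above, the cgroup warning is the likeliest root cause: the host runs cgroups v1 (see the CGROUPS_* verification lines), and per the warning text a v1.35 kubelet refuses to run on cgroups v1 unless the KubeletConfiguration option FailCgroupV1 is set to false, which would explain why the healthz probe never succeeds. A sketch of the two actionable fixes follows; the YAML field name is quoted from the warning, while wiring it into this node's kubelet config is an assumption, and the profile name functional-655452 is taken from the CRI-O hostname later in this log:

    # KubeletConfiguration fragment named by the warning (YAML):
    #   apiVersion: kubelet.config.k8s.io/v1beta1
    #   kind: KubeletConfiguration
    #   failCgroupV1: false
    # The Service-kubelet warning has a direct one-liner on the node:
    minikube -p functional-655452 ssh -- sudo systemctl enable kubelet.service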
	
	I1217 20:44:10.787339  528764 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1217 20:44:11.201167  528764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 20:44:11.214381  528764 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 20:44:11.214439  528764 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 20:44:11.222598  528764 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 20:44:11.222610  528764 kubeadm.go:158] found existing configuration files:
	
	I1217 20:44:11.222661  528764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1217 20:44:11.230419  528764 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 20:44:11.230478  528764 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 20:44:11.238159  528764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1217 20:44:11.246406  528764 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 20:44:11.246462  528764 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 20:44:11.254307  528764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1217 20:44:11.262104  528764 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 20:44:11.262159  528764 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 20:44:11.270202  528764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1217 20:44:11.278439  528764 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 20:44:11.278497  528764 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 20:44:11.286143  528764 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 20:44:11.330597  528764 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1217 20:44:11.330648  528764 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 20:44:11.407432  528764 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 20:44:11.407494  528764 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1217 20:44:11.407526  528764 kubeadm.go:319] OS: Linux
	I1217 20:44:11.407568  528764 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 20:44:11.407631  528764 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 20:44:11.407675  528764 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 20:44:11.407720  528764 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 20:44:11.407764  528764 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 20:44:11.407809  528764 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 20:44:11.407851  528764 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 20:44:11.407896  528764 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 20:44:11.407938  528764 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 20:44:11.479750  528764 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 20:44:11.479854  528764 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 20:44:11.479945  528764 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 20:44:11.492072  528764 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 20:44:11.494989  528764 out.go:252]   - Generating certificates and keys ...
	I1217 20:44:11.495078  528764 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 20:44:11.495152  528764 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 20:44:11.495231  528764 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1217 20:44:11.495312  528764 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1217 20:44:11.495394  528764 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1217 20:44:11.495452  528764 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1217 20:44:11.495526  528764 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1217 20:44:11.495616  528764 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1217 20:44:11.495700  528764 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1217 20:44:11.495778  528764 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1217 20:44:11.495818  528764 kubeadm.go:319] [certs] Using the existing "sa" key
	I1217 20:44:11.495877  528764 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 20:44:11.718879  528764 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 20:44:11.913718  528764 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 20:44:12.104953  528764 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 20:44:12.214740  528764 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 20:44:13.078100  528764 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 20:44:13.078681  528764 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 20:44:13.081470  528764 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 20:44:13.086841  528764 out.go:252]   - Booting up control plane ...
	I1217 20:44:13.086964  528764 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 20:44:13.087047  528764 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 20:44:13.087115  528764 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 20:44:13.101223  528764 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 20:44:13.101325  528764 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 20:44:13.108618  528764 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 20:44:13.108874  528764 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 20:44:13.109039  528764 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 20:44:13.243147  528764 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 20:44:13.243267  528764 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 20:48:13.243345  528764 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000238438s
	I1217 20:48:13.243376  528764 kubeadm.go:319] 
	I1217 20:48:13.243430  528764 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 20:48:13.243460  528764 kubeadm.go:319] 	- The kubelet is not running
	I1217 20:48:13.243558  528764 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 20:48:13.243562  528764 kubeadm.go:319] 
	I1217 20:48:13.243678  528764 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 20:48:13.243708  528764 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 20:48:13.243736  528764 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 20:48:13.243739  528764 kubeadm.go:319] 
	I1217 20:48:13.247539  528764 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1217 20:48:13.247985  528764 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 20:48:13.248095  528764 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 20:48:13.248338  528764 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1217 20:48:13.248343  528764 kubeadm.go:319] 
	I1217 20:48:13.248416  528764 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1217 20:48:13.248469  528764 kubeadm.go:403] duration metric: took 12m7.468824114s to StartCluster
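The 12m7s StartCluster total decomposes cleanly from the timestamps above (approximate accounting):

    #   4m02.9s  restartPrimaryControlPlane fails (logged 20:40:08.736)
    # +    ~2s   kubeadm reset plus stale-config cleanup
    # +  4m00s   1st kubeadm init kubelet-check window (20:40:10 -> 20:44:10)
    # +  ~2.5s   reset plus cleanup before the retry
    # +  4m00s   2nd kubeadm init kubelet-check window (20:44:13 -> 20:48:13)
    # = ~12m07s  matches "took 12m7.468824114s to StartCluster"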
	I1217 20:48:13.248499  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:48:13.248560  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:48:13.273652  528764 cri.go:89] found id: ""
	I1217 20:48:13.273665  528764 logs.go:282] 0 containers: []
	W1217 20:48:13.273672  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:48:13.273677  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:48:13.273743  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:48:13.299758  528764 cri.go:89] found id: ""
	I1217 20:48:13.299773  528764 logs.go:282] 0 containers: []
	W1217 20:48:13.299780  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:48:13.299787  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:48:13.299849  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:48:13.331514  528764 cri.go:89] found id: ""
	I1217 20:48:13.331527  528764 logs.go:282] 0 containers: []
	W1217 20:48:13.331534  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:48:13.331538  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:48:13.331632  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:48:13.361494  528764 cri.go:89] found id: ""
	I1217 20:48:13.361508  528764 logs.go:282] 0 containers: []
	W1217 20:48:13.361515  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:48:13.361520  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:48:13.361583  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:48:13.392361  528764 cri.go:89] found id: ""
	I1217 20:48:13.392374  528764 logs.go:282] 0 containers: []
	W1217 20:48:13.392382  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:48:13.392387  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:48:13.392445  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:48:13.420567  528764 cri.go:89] found id: ""
	I1217 20:48:13.420581  528764 logs.go:282] 0 containers: []
	W1217 20:48:13.420589  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:48:13.420594  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:48:13.420652  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:48:13.446072  528764 cri.go:89] found id: ""
	I1217 20:48:13.446086  528764 logs.go:282] 0 containers: []
	W1217 20:48:13.446093  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:48:13.446102  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:48:13.446112  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:48:13.512293  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:48:13.512314  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:48:13.527934  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:48:13.527951  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:48:13.596728  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:48:13.587277   21200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:48:13.587931   21200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:48:13.589795   21200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:48:13.590404   21200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:48:13.592261   21200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:48:13.587277   21200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:48:13.587931   21200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:48:13.589795   21200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:48:13.590404   21200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:48:13.592261   21200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:48:13.596751  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:48:13.596762  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:48:13.666834  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:48:13.666852  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 20:48:13.697763  528764 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000238438s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1217 20:48:13.697796  528764 out.go:285] * 
	W1217 20:48:13.697859  528764 out.go:285] X Error starting cluster: wait: (identical kubeadm init failure as dumped at 20:48:13.697763 above; duplicate output elided)
	W1217 20:48:13.697876  528764 out.go:285] * 
	W1217 20:48:13.700016  528764 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 20:48:13.704929  528764 out.go:203] 
	W1217 20:48:13.708733  528764 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000238438s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1217 20:48:13.708785  528764 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1217 20:48:13.708804  528764 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1217 20:48:13.713576  528764 out.go:203] 
	
	
	==> CRI-O <==
	Dec 17 20:36:04 functional-655452 crio[10065]: time="2025-12-17T20:36:04.496553819Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 17 20:36:04 functional-655452 crio[10065]: time="2025-12-17T20:36:04.496588913Z" level=info msg="Starting seccomp notifier watcher"
	Dec 17 20:36:04 functional-655452 crio[10065]: time="2025-12-17T20:36:04.496641484Z" level=info msg="Create NRI interface"
	Dec 17 20:36:04 functional-655452 crio[10065]: time="2025-12-17T20:36:04.496756307Z" level=info msg="built-in NRI default validator is disabled"
	Dec 17 20:36:04 functional-655452 crio[10065]: time="2025-12-17T20:36:04.496765161Z" level=info msg="runtime interface created"
	Dec 17 20:36:04 functional-655452 crio[10065]: time="2025-12-17T20:36:04.496787586Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 17 20:36:04 functional-655452 crio[10065]: time="2025-12-17T20:36:04.496795537Z" level=info msg="runtime interface starting up..."
	Dec 17 20:36:04 functional-655452 crio[10065]: time="2025-12-17T20:36:04.496804792Z" level=info msg="starting plugins..."
	Dec 17 20:36:04 functional-655452 crio[10065]: time="2025-12-17T20:36:04.496818503Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 17 20:36:04 functional-655452 crio[10065]: time="2025-12-17T20:36:04.496896764Z" level=info msg="No systemd watchdog enabled"
	Dec 17 20:36:04 functional-655452 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 17 20:40:09 functional-655452 crio[10065]: time="2025-12-17T20:40:09.415834383Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-rc.1" id=58f6f0f1-488b-4240-a679-3e157f00d7e7 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:40:09 functional-655452 crio[10065]: time="2025-12-17T20:40:09.416590837Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" id=05b425cc-49a9-416d-8e00-62945047df36 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:40:09 functional-655452 crio[10065]: time="2025-12-17T20:40:09.417323538Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-rc.1" id=a9a38e6d-b290-413f-a93f-cf194783972f name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:40:09 functional-655452 crio[10065]: time="2025-12-17T20:40:09.417962945Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-rc.1" id=bdf79a37-e5ac-441d-baa9-990efb2af86f name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:40:09 functional-655452 crio[10065]: time="2025-12-17T20:40:09.418404377Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=f29ade00-2b87-48af-a8d1-af1f70d12fc1 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:40:09 functional-655452 crio[10065]: time="2025-12-17T20:40:09.418943992Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=aa01ccac-5dc1-42c2-9b96-b5307aedf908 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:40:09 functional-655452 crio[10065]: time="2025-12-17T20:40:09.419435131Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.6-0" id=3071c5cb-d2e8-40e4-bf26-10cfdb83c6ca name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:44:11 functional-655452 crio[10065]: time="2025-12-17T20:44:11.483168755Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-rc.1" id=116885b2-e96e-48a5-8c7d-749c0bd3c872 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:44:11 functional-655452 crio[10065]: time="2025-12-17T20:44:11.484179432Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" id=a7b99d88-fbbf-4485-ad77-1f09bb11e283 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:44:11 functional-655452 crio[10065]: time="2025-12-17T20:44:11.484714555Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-rc.1" id=1a3a48a9-47e1-4681-9a10-70d7c5e85de2 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:44:11 functional-655452 crio[10065]: time="2025-12-17T20:44:11.48529777Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-rc.1" id=48ecbe50-05dc-4736-8a4c-23a7b8f0b752 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:44:11 functional-655452 crio[10065]: time="2025-12-17T20:44:11.485817657Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=13bf3d26-ab2e-4773-bb7e-3fc288ba3714 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:44:11 functional-655452 crio[10065]: time="2025-12-17T20:44:11.486350122Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=3ebf0c9f-0c46-4d67-8924-03dd39ad4399 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:44:11 functional-655452 crio[10065]: time="2025-12-17T20:44:11.486847969Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.6-0" id=e3deb8c8-e04b-4949-9c80-5a8e5a9b5bee name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:48:17.161354   21462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:48:17.162222   21462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:48:17.163965   21462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:48:17.164544   21462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:48:17.166159   21462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec17 19:02] hrtimer: interrupt took 71042327 ns
	[Dec17 20:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec17 20:12] overlayfs: idmapped layers are currently not supported
	[  +0.080288] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec17 20:17] overlayfs: idmapped layers are currently not supported
	[Dec17 20:18] overlayfs: idmapped layers are currently not supported
	[Dec17 20:35] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 20:48:17 up  3:30,  0 user,  load average: 0.28, 0.24, 0.54
	Linux functional-655452 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 20:48:14 functional-655452 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 20:48:15 functional-655452 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 964.
	Dec 17 20:48:15 functional-655452 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 20:48:15 functional-655452 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 20:48:15 functional-655452 kubelet[21336]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 20:48:15 functional-655452 kubelet[21336]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 20:48:15 functional-655452 kubelet[21336]: E1217 20:48:15.643509   21336 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 20:48:15 functional-655452 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 20:48:15 functional-655452 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 20:48:16 functional-655452 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 965.
	Dec 17 20:48:16 functional-655452 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 20:48:16 functional-655452 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 20:48:16 functional-655452 kubelet[21371]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 20:48:16 functional-655452 kubelet[21371]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 20:48:16 functional-655452 kubelet[21371]: E1217 20:48:16.397602   21371 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 20:48:16 functional-655452 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 20:48:16 functional-655452 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 20:48:17 functional-655452 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 966.
	Dec 17 20:48:17 functional-655452 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 20:48:17 functional-655452 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 20:48:17 functional-655452 kubelet[21455]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 20:48:17 functional-655452 kubelet[21455]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 20:48:17 functional-655452 kubelet[21455]: E1217 20:48:17.132889   21455 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 20:48:17 functional-655452 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 20:48:17 functional-655452 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-655452 -n functional-655452
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-655452 -n functional-655452: exit status 2 (393.797178ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-655452" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth (2.27s)
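The kubelet section above pinpoints the root cause of this failure: kubelet v1.35.0-rc.1 refuses to start on a cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1"), so kubeadm's wait-control-plane phase times out and the apiserver never comes up. A minimal triage sketch, assuming shell access to the node: the cgroup probe is the standard filesystem check, the start flag is the exact suggestion minikube printed above, and the failCgroupV1 field name is inferred from the [WARNING SystemVerification] text rather than verified against the v1.35 KubeletConfiguration schema.

	# "cgroup2fs" means the host is on cgroup v2; "tmpfs" means cgroup v1 (the failing case here)
	stat -fc %T /sys/fs/cgroup

	# Retry with the workaround minikube suggested in the log above
	minikube start -p functional-655452 --extra-config=kubelet.cgroup-driver=systemd

	# Assumed KubeletConfiguration snippet to opt back into cgroup v1 on v1.35+
	# (field name taken from the warning text, unverified):
	#   failCgroupV1: false

Per the same warning, staying on cgroup v1 also requires explicitly skipping the SystemVerification check, so the config field alone would not be enough.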

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-655452 apply -f testdata/invalidsvc.yaml
functional_test.go:2326: (dbg) Non-zero exit: kubectl --context functional-655452 apply -f testdata/invalidsvc.yaml: exit status 1 (63.311193ms)

** stderr ** 
	error: error validating "testdata/invalidsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

** /stderr **
functional_test.go:2328: kubectl --context functional-655452 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService (0.06s)
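This failure is collateral damage from the dead apiserver rather than a verdict on the manifest: kubectl's client-side validation needs the OpenAPI schema, and the download from 192.168.49.2:8441 was refused before the service definition was ever evaluated. A quick way to tell "cluster unreachable" apart from "manifest invalid", using standard kubectl calls plus the --validate=false escape hatch the error message itself suggests:

	# Probe the apiserver directly; connection refused here means validation never ran
	kubectl --context functional-655452 get --raw /readyz

	# Only once the cluster is reachable, and only if skipping schema validation is intended
	kubectl --context functional-655452 apply -f testdata/invalidsvc.yaml --validate=false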

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd (1.73s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-655452 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-655452 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-655452 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-655452 --alsologtostderr -v=1] stderr:
I1217 20:50:14.671028  546076 out.go:360] Setting OutFile to fd 1 ...
I1217 20:50:14.671216  546076 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 20:50:14.671229  546076 out.go:374] Setting ErrFile to fd 2...
I1217 20:50:14.671235  546076 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 20:50:14.671497  546076 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-485134/.minikube/bin
I1217 20:50:14.671795  546076 mustload.go:66] Loading cluster: functional-655452
I1217 20:50:14.672248  546076 config.go:182] Loaded profile config "functional-655452": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1217 20:50:14.672736  546076 cli_runner.go:164] Run: docker container inspect functional-655452 --format={{.State.Status}}
I1217 20:50:14.691573  546076 host.go:66] Checking if "functional-655452" exists ...
I1217 20:50:14.691941  546076 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1217 20:50:14.749383  546076 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 20:50:14.7403891 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1217 20:50:14.749516  546076 api_server.go:166] Checking apiserver status ...
I1217 20:50:14.749589  546076 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1217 20:50:14.749632  546076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
I1217 20:50:14.767625  546076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/functional-655452/id_rsa Username:docker}
W1217 20:50:14.867554  546076 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I1217 20:50:14.870846  546076 out.go:179] * The control-plane node functional-655452 apiserver is not running: (state=Stopped)
I1217 20:50:14.873934  546076 out.go:179]   To start a cluster, run: "minikube start -p functional-655452"
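The stderr above shows why no dashboard URL was produced: minikube's preflight (api_server.go) probed for a kube-apiserver process over SSH, found none, and bailed out with state=Stopped. The same probe can be replayed by hand; this sketch reuses the exact pgrep pattern from the log:

	# Re-run minikube's apiserver liveness check inside the node
	minikube -p functional-655452 ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	# Exit status 1 (no matching process) is what yields "apiserver is not running (state=Stopped)"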
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-655452
helpers_test.go:244: (dbg) docker inspect functional-655452:

-- stdout --
	[
	    {
	        "Id": "ce7a6a54d14f425b4187bd0ca1b803527a4413b0e2019a0bca6ff0c4cc720b0f",
	        "Created": "2025-12-17T20:21:23.127111799Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 517339,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T20:21:23.205943598Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/ce7a6a54d14f425b4187bd0ca1b803527a4413b0e2019a0bca6ff0c4cc720b0f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ce7a6a54d14f425b4187bd0ca1b803527a4413b0e2019a0bca6ff0c4cc720b0f/hostname",
	        "HostsPath": "/var/lib/docker/containers/ce7a6a54d14f425b4187bd0ca1b803527a4413b0e2019a0bca6ff0c4cc720b0f/hosts",
	        "LogPath": "/var/lib/docker/containers/ce7a6a54d14f425b4187bd0ca1b803527a4413b0e2019a0bca6ff0c4cc720b0f/ce7a6a54d14f425b4187bd0ca1b803527a4413b0e2019a0bca6ff0c4cc720b0f-json.log",
	        "Name": "/functional-655452",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-655452:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-655452",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ce7a6a54d14f425b4187bd0ca1b803527a4413b0e2019a0bca6ff0c4cc720b0f",
	                "LowerDir": "/var/lib/docker/overlay2/b078ea641dafe3bb75ca3d83c43fd851f0f209e5f3a5a8b9ceab888e394031a8-init/diff:/var/lib/docker/overlay2/c4759b344ecb109c83d66e4ef56c76903f1d1e597efb86ab2d2c7911b1130a8f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b078ea641dafe3bb75ca3d83c43fd851f0f209e5f3a5a8b9ceab888e394031a8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b078ea641dafe3bb75ca3d83c43fd851f0f209e5f3a5a8b9ceab888e394031a8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b078ea641dafe3bb75ca3d83c43fd851f0f209e5f3a5a8b9ceab888e394031a8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-655452",
	                "Source": "/var/lib/docker/volumes/functional-655452/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-655452",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-655452",
	                "name.minikube.sigs.k8s.io": "functional-655452",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "77ae7dbcf69b3201be4ce65a88d235dbad078142318e01a5939425f9766fb924",
	            "SandboxKey": "/var/run/docker/netns/77ae7dbcf69b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33178"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33179"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33182"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33180"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33181"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-655452": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "36:c6:98:b5:58:5a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b68cd6e69ccdb81b0870dd976e2d5401ca696480842d2c469293121d59435cfb",
	                    "EndpointID": "82926bb4b08b5f66131c1026a8bc72a582e9cb8c6d97a278a494965923f47cb0",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-655452",
	                        "ce7a6a54d14f"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
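Most of the inspect dump above is incidental; the load-bearing facts are that the container is Running and that apiserver port 8441 is published on 127.0.0.1:33181 even though nothing answers inside it. A convenience sketch for extracting just those fields with standard docker inspect format templates instead of scanning the full JSON:

	# Container state and the published port map only
	docker inspect -f '{{.State.Status}} (pid {{.State.Pid}})' functional-655452
	docker inspect -f '{{json .NetworkSettings.Ports}}' functional-655452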
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-655452 -n functional-655452
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-655452 -n functional-655452: exit status 2 (319.047513ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                        ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ service   │ functional-655452 service hello-node --url                                                                                                         │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:49 UTC │                     │
	│ ssh       │ functional-655452 ssh findmnt -T /mount-9p | grep 9p                                                                                               │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:50 UTC │                     │
	│ mount     │ -p functional-655452 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun878729531/001:/mount-9p --alsologtostderr -v=1              │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:50 UTC │                     │
	│ ssh       │ functional-655452 ssh findmnt -T /mount-9p | grep 9p                                                                                               │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:50 UTC │ 17 Dec 25 20:50 UTC │
	│ ssh       │ functional-655452 ssh -- ls -la /mount-9p                                                                                                          │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:50 UTC │ 17 Dec 25 20:50 UTC │
	│ ssh       │ functional-655452 ssh cat /mount-9p/test-1766004604061879051                                                                                       │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:50 UTC │ 17 Dec 25 20:50 UTC │
	│ ssh       │ functional-655452 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                   │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:50 UTC │                     │
	│ ssh       │ functional-655452 ssh sudo umount -f /mount-9p                                                                                                     │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:50 UTC │ 17 Dec 25 20:50 UTC │
	│ mount     │ -p functional-655452 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun191240204/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:50 UTC │                     │
	│ ssh       │ functional-655452 ssh findmnt -T /mount-9p | grep 9p                                                                                               │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:50 UTC │                     │
	│ ssh       │ functional-655452 ssh findmnt -T /mount-9p | grep 9p                                                                                               │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:50 UTC │ 17 Dec 25 20:50 UTC │
	│ ssh       │ functional-655452 ssh -- ls -la /mount-9p                                                                                                          │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:50 UTC │ 17 Dec 25 20:50 UTC │
	│ ssh       │ functional-655452 ssh sudo umount -f /mount-9p                                                                                                     │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:50 UTC │                     │
	│ mount     │ -p functional-655452 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun309516247/001:/mount1 --alsologtostderr -v=1                │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:50 UTC │                     │
	│ mount     │ -p functional-655452 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun309516247/001:/mount2 --alsologtostderr -v=1                │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:50 UTC │                     │
	│ mount     │ -p functional-655452 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun309516247/001:/mount3 --alsologtostderr -v=1                │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:50 UTC │                     │
	│ ssh       │ functional-655452 ssh findmnt -T /mount1                                                                                                           │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:50 UTC │                     │
	│ ssh       │ functional-655452 ssh findmnt -T /mount1                                                                                                           │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:50 UTC │ 17 Dec 25 20:50 UTC │
	│ ssh       │ functional-655452 ssh findmnt -T /mount2                                                                                                           │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:50 UTC │ 17 Dec 25 20:50 UTC │
	│ ssh       │ functional-655452 ssh findmnt -T /mount3                                                                                                           │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:50 UTC │ 17 Dec 25 20:50 UTC │
	│ mount     │ -p functional-655452 --kill=true                                                                                                                   │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:50 UTC │                     │
	│ start     │ -p functional-655452 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1        │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:50 UTC │                     │
	│ start     │ -p functional-655452 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1        │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:50 UTC │                     │
	│ start     │ -p functional-655452 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                  │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:50 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-655452 --alsologtostderr -v=1                                                                                     │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:50 UTC │                     │
	└───────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 20:50:14
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 20:50:14.416108  546005 out.go:360] Setting OutFile to fd 1 ...
	I1217 20:50:14.416303  546005 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:50:14.416337  546005 out.go:374] Setting ErrFile to fd 2...
	I1217 20:50:14.416356  546005 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:50:14.416664  546005 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-485134/.minikube/bin
	I1217 20:50:14.417124  546005 out.go:368] Setting JSON to false
	I1217 20:50:14.418092  546005 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":12764,"bootTime":1765991851,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1217 20:50:14.418210  546005 start.go:143] virtualization:  
	I1217 20:50:14.421680  546005 out.go:179] * [functional-655452] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 20:50:14.424709  546005 out.go:179]   - MINIKUBE_LOCATION=21808
	I1217 20:50:14.424790  546005 notify.go:221] Checking for updates...
	I1217 20:50:14.430624  546005 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 20:50:14.433538  546005 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-485134/kubeconfig
	I1217 20:50:14.436475  546005 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-485134/.minikube
	I1217 20:50:14.439543  546005 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 20:50:14.442511  546005 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 20:50:14.445934  546005 config.go:182] Loaded profile config "functional-655452": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 20:50:14.446540  546005 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 20:50:14.472609  546005 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 20:50:14.472735  546005 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 20:50:14.533624  546005 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 20:50:14.524341367 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 20:50:14.533733  546005 docker.go:319] overlay module found
	I1217 20:50:14.536881  546005 out.go:179] * Using the docker driver based on existing profile
	I1217 20:50:14.539750  546005 start.go:309] selected driver: docker
	I1217 20:50:14.539770  546005 start.go:927] validating driver "docker" against &{Name:functional-655452 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-655452 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:50:14.539870  546005 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 20:50:14.539971  546005 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 20:50:14.610390  546005 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 20:50:14.601163794 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 20:50:14.610824  546005 cni.go:84] Creating CNI manager for ""
	I1217 20:50:14.610891  546005 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 20:50:14.610937  546005 start.go:353] cluster config:
	{Name:functional-655452 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-655452 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:50:14.614124  546005 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	(CRI-O log entries identical to the dump shown earlier in this report; omitted)
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:50:15.920664   23479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:50:15.921176   23479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:50:15.922860   23479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:50:15.923170   23479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:50:15.924724   23479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec17 19:02] hrtimer: interrupt took 71042327 ns
	[Dec17 20:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec17 20:12] overlayfs: idmapped layers are currently not supported
	[  +0.080288] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec17 20:17] overlayfs: idmapped layers are currently not supported
	[Dec17 20:18] overlayfs: idmapped layers are currently not supported
	[Dec17 20:35] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 20:50:15 up  3:32,  0 user,  load average: 0.96, 0.38, 0.55
	Linux functional-655452 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 20:50:13 functional-655452 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 20:50:14 functional-655452 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1122.
	Dec 17 20:50:14 functional-655452 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 20:50:14 functional-655452 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 20:50:14 functional-655452 kubelet[23359]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 20:50:14 functional-655452 kubelet[23359]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 20:50:14 functional-655452 kubelet[23359]: E1217 20:50:14.136206   23359 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 20:50:14 functional-655452 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 20:50:14 functional-655452 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 20:50:14 functional-655452 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1123.
	Dec 17 20:50:14 functional-655452 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 20:50:14 functional-655452 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 20:50:14 functional-655452 kubelet[23371]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 20:50:14 functional-655452 kubelet[23371]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 20:50:14 functional-655452 kubelet[23371]: E1217 20:50:14.876509   23371 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 20:50:14 functional-655452 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 20:50:14 functional-655452 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 20:50:15 functional-655452 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1124.
	Dec 17 20:50:15 functional-655452 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 20:50:15 functional-655452 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 20:50:15 functional-655452 kubelet[23401]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 20:50:15 functional-655452 kubelet[23401]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 20:50:15 functional-655452 kubelet[23401]: E1217 20:50:15.623361   23401 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 20:50:15 functional-655452 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 20:50:15 functional-655452 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
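The log dump above captures the root cause behind most failures in this group: kubelet is stuck in a systemd restart loop (restart counter 1122-1124) because this kubelet build refuses to start on a cgroup v1 host, so kube-apiserver never comes up and kubectl's requests to localhost:8441 are refused. Two quick host-side checks, as a sketch (the container name and the 8441/tcp -> 127.0.0.1:33181 mapping are taken from the docker inspect output later in this report):

	# Which cgroup hierarchy does the node container see?
	# "cgroup2fs" means cgroup v2; "tmpfs" means the legacy v1 layout
	# that this kubelet rejects.
	docker exec functional-655452 stat -fc %T /sys/fs/cgroup/

	# Probe the host port mapped to the apiserver's 8441/tcp; a failed
	# request matches the "connection refused" kubectl errors above.
	curl -sk --max-time 5 https://127.0.0.1:33181/healthz || echo "apiserver unreachable"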
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-655452 -n functional-655452
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-655452 -n functional-655452: exit status 2 (334.441128ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-655452" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd (1.73s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd (3.01s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 status
functional_test.go:869: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-655452 status: exit status 2 (298.079818ms)

-- stdout --
	functional-655452
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	

-- /stdout --
functional_test.go:871: failed to run minikube status. args "out/minikube-linux-arm64 -p functional-655452 status" : exit status 2
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:875: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-655452 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 2 (298.150646ms)

-- stdout --
	host:Running,kublet:Stopped,apiserver:Stopped,kubeconfig:Configured

-- /stdout --
functional_test.go:877: failed to run minikube status with custom format: args "out/minikube-linux-arm64 -p functional-655452 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 2
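Note that the "kublet:" label in the output above is echoed from the format string itself: -f takes a Go template, literal text outside {{...}} is printed verbatim, and the underlying field is .Kubelet. For example, the same query with the label spelled correctly (only the literal text changes, the fields are identical):

	out/minikube-linux-arm64 -p functional-655452 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'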
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 status -o json
functional_test.go:887: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-655452 status -o json: exit status 2 (335.174795ms)

-- stdout --
	{"Name":"functional-655452","Host":"Running","Kubelet":"Running","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
functional_test.go:889: failed to run minikube status with json output. args "out/minikube-linux-arm64 -p functional-655452 status -o json" : exit status 2
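Because status -o json prints a single JSON object (shown above), its fields are easy to script against; a sketch, assuming jq is installed on the host:

	# Extract just the apiserver state; prints "Stopped" for the run above.
	out/minikube-linux-arm64 -p functional-655452 status -o json | jq -r .APIServer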
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-655452
helpers_test.go:244: (dbg) docker inspect functional-655452:

-- stdout --
	[
	    {
	        "Id": "ce7a6a54d14f425b4187bd0ca1b803527a4413b0e2019a0bca6ff0c4cc720b0f",
	        "Created": "2025-12-17T20:21:23.127111799Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 517339,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T20:21:23.205943598Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/ce7a6a54d14f425b4187bd0ca1b803527a4413b0e2019a0bca6ff0c4cc720b0f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ce7a6a54d14f425b4187bd0ca1b803527a4413b0e2019a0bca6ff0c4cc720b0f/hostname",
	        "HostsPath": "/var/lib/docker/containers/ce7a6a54d14f425b4187bd0ca1b803527a4413b0e2019a0bca6ff0c4cc720b0f/hosts",
	        "LogPath": "/var/lib/docker/containers/ce7a6a54d14f425b4187bd0ca1b803527a4413b0e2019a0bca6ff0c4cc720b0f/ce7a6a54d14f425b4187bd0ca1b803527a4413b0e2019a0bca6ff0c4cc720b0f-json.log",
	        "Name": "/functional-655452",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-655452:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-655452",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ce7a6a54d14f425b4187bd0ca1b803527a4413b0e2019a0bca6ff0c4cc720b0f",
	                "LowerDir": "/var/lib/docker/overlay2/b078ea641dafe3bb75ca3d83c43fd851f0f209e5f3a5a8b9ceab888e394031a8-init/diff:/var/lib/docker/overlay2/c4759b344ecb109c83d66e4ef56c76903f1d1e597efb86ab2d2c7911b1130a8f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b078ea641dafe3bb75ca3d83c43fd851f0f209e5f3a5a8b9ceab888e394031a8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b078ea641dafe3bb75ca3d83c43fd851f0f209e5f3a5a8b9ceab888e394031a8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b078ea641dafe3bb75ca3d83c43fd851f0f209e5f3a5a8b9ceab888e394031a8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-655452",
	                "Source": "/var/lib/docker/volumes/functional-655452/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-655452",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-655452",
	                "name.minikube.sigs.k8s.io": "functional-655452",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "77ae7dbcf69b3201be4ce65a88d235dbad078142318e01a5939425f9766fb924",
	            "SandboxKey": "/var/run/docker/netns/77ae7dbcf69b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33178"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33179"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33182"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33180"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33181"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-655452": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "36:c6:98:b5:58:5a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b68cd6e69ccdb81b0870dd976e2d5401ca696480842d2c469293121d59435cfb",
	                    "EndpointID": "82926bb4b08b5f66131c1026a8bc72a582e9cb8c6d97a278a494965923f47cb0",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-655452",
	                        "ce7a6a54d14f"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
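The "Ports" map in the inspect output above is what minikube itself consults to find the node's SSH endpoint; the same lookup can be reproduced by hand with a Go template (the identical command appears in the cli_runner lines of the start log below):

	# Print the host port bound to the node container's 22/tcp
	# (33178 for this run).
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' functional-655452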
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-655452 -n functional-655452
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-655452 -n functional-655452: exit status 2 (294.932637ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                        ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ service │ functional-655452 service list                                                                                                                     │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:49 UTC │                     │
	│ service │ functional-655452 service list -o json                                                                                                             │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:49 UTC │                     │
	│ service │ functional-655452 service --namespace=default --https --url hello-node                                                                             │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:49 UTC │                     │
	│ service │ functional-655452 service hello-node --url --format={{.IP}}                                                                                        │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:49 UTC │                     │
	│ service │ functional-655452 service hello-node --url                                                                                                         │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:49 UTC │                     │
	│ ssh     │ functional-655452 ssh findmnt -T /mount-9p | grep 9p                                                                                               │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:50 UTC │                     │
	│ mount   │ -p functional-655452 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun878729531/001:/mount-9p --alsologtostderr -v=1              │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:50 UTC │                     │
	│ ssh     │ functional-655452 ssh findmnt -T /mount-9p | grep 9p                                                                                               │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:50 UTC │ 17 Dec 25 20:50 UTC │
	│ ssh     │ functional-655452 ssh -- ls -la /mount-9p                                                                                                          │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:50 UTC │ 17 Dec 25 20:50 UTC │
	│ ssh     │ functional-655452 ssh cat /mount-9p/test-1766004604061879051                                                                                       │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:50 UTC │ 17 Dec 25 20:50 UTC │
	│ ssh     │ functional-655452 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                   │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:50 UTC │                     │
	│ ssh     │ functional-655452 ssh sudo umount -f /mount-9p                                                                                                     │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:50 UTC │ 17 Dec 25 20:50 UTC │
	│ mount   │ -p functional-655452 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun191240204/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:50 UTC │                     │
	│ ssh     │ functional-655452 ssh findmnt -T /mount-9p | grep 9p                                                                                               │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:50 UTC │                     │
	│ ssh     │ functional-655452 ssh findmnt -T /mount-9p | grep 9p                                                                                               │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:50 UTC │ 17 Dec 25 20:50 UTC │
	│ ssh     │ functional-655452 ssh -- ls -la /mount-9p                                                                                                          │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:50 UTC │ 17 Dec 25 20:50 UTC │
	│ ssh     │ functional-655452 ssh sudo umount -f /mount-9p                                                                                                     │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:50 UTC │                     │
	│ mount   │ -p functional-655452 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun309516247/001:/mount1 --alsologtostderr -v=1                │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:50 UTC │                     │
	│ mount   │ -p functional-655452 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun309516247/001:/mount2 --alsologtostderr -v=1                │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:50 UTC │                     │
	│ mount   │ -p functional-655452 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun309516247/001:/mount3 --alsologtostderr -v=1                │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:50 UTC │                     │
	│ ssh     │ functional-655452 ssh findmnt -T /mount1                                                                                                           │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:50 UTC │                     │
	│ ssh     │ functional-655452 ssh findmnt -T /mount1                                                                                                           │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:50 UTC │ 17 Dec 25 20:50 UTC │
	│ ssh     │ functional-655452 ssh findmnt -T /mount2                                                                                                           │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:50 UTC │ 17 Dec 25 20:50 UTC │
	│ ssh     │ functional-655452 ssh findmnt -T /mount3                                                                                                           │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:50 UTC │ 17 Dec 25 20:50 UTC │
	│ mount   │ -p functional-655452 --kill=true                                                                                                                   │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:50 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 20:36:01
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 20:36:01.304180  528764 out.go:360] Setting OutFile to fd 1 ...
	I1217 20:36:01.304299  528764 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:36:01.304303  528764 out.go:374] Setting ErrFile to fd 2...
	I1217 20:36:01.304307  528764 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:36:01.304548  528764 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-485134/.minikube/bin
	I1217 20:36:01.304941  528764 out.go:368] Setting JSON to false
	I1217 20:36:01.305793  528764 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":11911,"bootTime":1765991851,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1217 20:36:01.305860  528764 start.go:143] virtualization:  
	I1217 20:36:01.309940  528764 out.go:179] * [functional-655452] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 20:36:01.313178  528764 out.go:179]   - MINIKUBE_LOCATION=21808
	I1217 20:36:01.313261  528764 notify.go:221] Checking for updates...
	I1217 20:36:01.319276  528764 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 20:36:01.322533  528764 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-485134/kubeconfig
	I1217 20:36:01.325481  528764 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-485134/.minikube
	I1217 20:36:01.328332  528764 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 20:36:01.331257  528764 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 20:36:01.334638  528764 config.go:182] Loaded profile config "functional-655452": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 20:36:01.334735  528764 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 20:36:01.377324  528764 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 20:36:01.377436  528764 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 20:36:01.442821  528764 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-17 20:36:01.432767342 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 20:36:01.442911  528764 docker.go:319] overlay module found
	I1217 20:36:01.446093  528764 out.go:179] * Using the docker driver based on existing profile
	I1217 20:36:01.448835  528764 start.go:309] selected driver: docker
	I1217 20:36:01.448847  528764 start.go:927] validating driver "docker" against &{Name:functional-655452 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-655452 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:36:01.448948  528764 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 20:36:01.449055  528764 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 20:36:01.502893  528764 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-17 20:36:01.493096577 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 20:36:01.503296  528764 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 20:36:01.503325  528764 cni.go:84] Creating CNI manager for ""
	I1217 20:36:01.503373  528764 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 20:36:01.503423  528764 start.go:353] cluster config:
	{Name:functional-655452 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-655452 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:36:01.506646  528764 out.go:179] * Starting "functional-655452" primary control-plane node in "functional-655452" cluster
	I1217 20:36:01.509580  528764 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 20:36:01.512594  528764 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 20:36:01.515481  528764 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1217 20:36:01.515521  528764 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21808-485134/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4
	I1217 20:36:01.515533  528764 cache.go:65] Caching tarball of preloaded images
	I1217 20:36:01.515555  528764 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 20:36:01.515635  528764 preload.go:238] Found /home/jenkins/minikube-integration/21808-485134/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1217 20:36:01.515645  528764 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on crio
	I1217 20:36:01.515757  528764 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/config.json ...
	I1217 20:36:01.536964  528764 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 20:36:01.536994  528764 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 20:36:01.537012  528764 cache.go:243] Successfully downloaded all kic artifacts
	I1217 20:36:01.537046  528764 start.go:360] acquireMachinesLock for functional-655452: {Name:mk8b480106227149e1b79da3c4f580b9dc5e0f96 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 20:36:01.537100  528764 start.go:364] duration metric: took 37.99µs to acquireMachinesLock for "functional-655452"
	I1217 20:36:01.537118  528764 start.go:96] Skipping create...Using existing machine configuration
	I1217 20:36:01.537122  528764 fix.go:54] fixHost starting: 
	I1217 20:36:01.537383  528764 cli_runner.go:164] Run: docker container inspect functional-655452 --format={{.State.Status}}
	I1217 20:36:01.554557  528764 fix.go:112] recreateIfNeeded on functional-655452: state=Running err=<nil>
	W1217 20:36:01.554578  528764 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 20:36:01.557934  528764 out.go:252] * Updating the running docker "functional-655452" container ...
	I1217 20:36:01.557966  528764 machine.go:94] provisionDockerMachine start ...
	I1217 20:36:01.558073  528764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
	I1217 20:36:01.576191  528764 main.go:143] libmachine: Using SSH client type: native
	I1217 20:36:01.576509  528764 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1217 20:36:01.576515  528764 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 20:36:01.707478  528764 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-655452
	
	I1217 20:36:01.707493  528764 ubuntu.go:182] provisioning hostname "functional-655452"
	I1217 20:36:01.707564  528764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
	I1217 20:36:01.725762  528764 main.go:143] libmachine: Using SSH client type: native
	I1217 20:36:01.726063  528764 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1217 20:36:01.726071  528764 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-655452 && echo "functional-655452" | sudo tee /etc/hostname
	I1217 20:36:01.865176  528764 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-655452
	
	I1217 20:36:01.865255  528764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
	I1217 20:36:01.884852  528764 main.go:143] libmachine: Using SSH client type: native
	I1217 20:36:01.885159  528764 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1217 20:36:01.885174  528764 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-655452' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-655452/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-655452' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 20:36:02.016339  528764 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 20:36:02.016355  528764 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21808-485134/.minikube CaCertPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21808-485134/.minikube}
	I1217 20:36:02.016378  528764 ubuntu.go:190] setting up certificates
	I1217 20:36:02.016388  528764 provision.go:84] configureAuth start
	I1217 20:36:02.016451  528764 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-655452
	I1217 20:36:02.035106  528764 provision.go:143] copyHostCerts
	I1217 20:36:02.035175  528764 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem, removing ...
	I1217 20:36:02.035183  528764 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem
	I1217 20:36:02.035257  528764 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem (1082 bytes)
	I1217 20:36:02.035375  528764 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem, removing ...
	I1217 20:36:02.035379  528764 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem
	I1217 20:36:02.035406  528764 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem (1123 bytes)
	I1217 20:36:02.035470  528764 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem, removing ...
	I1217 20:36:02.035473  528764 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem
	I1217 20:36:02.035496  528764 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem (1675 bytes)
	I1217 20:36:02.035545  528764 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem org=jenkins.functional-655452 san=[127.0.0.1 192.168.49.2 functional-655452 localhost minikube]
	I1217 20:36:02.115164  528764 provision.go:177] copyRemoteCerts
	I1217 20:36:02.115221  528764 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 20:36:02.115260  528764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
	I1217 20:36:02.139076  528764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/functional-655452/id_rsa Username:docker}
	I1217 20:36:02.235601  528764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 20:36:02.254294  528764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 20:36:02.272604  528764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 20:36:02.290727  528764 provision.go:87] duration metric: took 274.326255ms to configureAuth
	I1217 20:36:02.290752  528764 ubuntu.go:206] setting minikube options for container-runtime
	I1217 20:36:02.291001  528764 config.go:182] Loaded profile config "functional-655452": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 20:36:02.291105  528764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
	I1217 20:36:02.309578  528764 main.go:143] libmachine: Using SSH client type: native
	I1217 20:36:02.309891  528764 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1217 20:36:02.309902  528764 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 20:36:02.644802  528764 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 20:36:02.644817  528764 machine.go:97] duration metric: took 1.086843683s to provisionDockerMachine
	I1217 20:36:02.644827  528764 start.go:293] postStartSetup for "functional-655452" (driver="docker")
	I1217 20:36:02.644838  528764 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 20:36:02.644899  528764 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 20:36:02.644944  528764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
	I1217 20:36:02.663334  528764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/functional-655452/id_rsa Username:docker}
	I1217 20:36:02.759464  528764 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 20:36:02.762934  528764 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 20:36:02.762952  528764 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 20:36:02.762970  528764 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-485134/.minikube/addons for local assets ...
	I1217 20:36:02.763029  528764 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-485134/.minikube/files for local assets ...
	I1217 20:36:02.763103  528764 filesync.go:149] local asset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem -> 4884122.pem in /etc/ssl/certs
	I1217 20:36:02.763175  528764 filesync.go:149] local asset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/test/nested/copy/488412/hosts -> hosts in /etc/test/nested/copy/488412
	I1217 20:36:02.763216  528764 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/488412
	I1217 20:36:02.770652  528764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem --> /etc/ssl/certs/4884122.pem (1708 bytes)
	I1217 20:36:02.788458  528764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/test/nested/copy/488412/hosts --> /etc/test/nested/copy/488412/hosts (40 bytes)
	I1217 20:36:02.805971  528764 start.go:296] duration metric: took 161.129975ms for postStartSetup
	I1217 20:36:02.806055  528764 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 20:36:02.806105  528764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
	I1217 20:36:02.832327  528764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/functional-655452/id_rsa Username:docker}
	I1217 20:36:02.932517  528764 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 20:36:02.937022  528764 fix.go:56] duration metric: took 1.399892436s for fixHost
	I1217 20:36:02.937037  528764 start.go:83] releasing machines lock for "functional-655452", held for 1.399929845s
	I1217 20:36:02.937101  528764 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-655452
	I1217 20:36:02.954767  528764 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem (1338 bytes)
	W1217 20:36:02.954820  528764 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412_empty.pem, impossibly tiny 0 bytes
	I1217 20:36:02.954828  528764 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 20:36:02.954855  528764 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem (1082 bytes)
	I1217 20:36:02.954880  528764 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem (1123 bytes)
	I1217 20:36:02.954903  528764 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem (1675 bytes)
	I1217 20:36:02.954966  528764 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem (1708 bytes)
	I1217 20:36:02.955032  528764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem --> /usr/share/ca-certificates/488412.pem (1338 bytes)
	I1217 20:36:02.955078  528764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
	I1217 20:36:02.972629  528764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/functional-655452/id_rsa Username:docker}
	I1217 20:36:03.082963  528764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem --> /usr/share/ca-certificates/4884122.pem (1708 bytes)
	I1217 20:36:03.101544  528764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 20:36:03.119807  528764 ssh_runner.go:195] Run: openssl version
	I1217 20:36:03.126345  528764 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/488412.pem
	I1217 20:36:03.134006  528764 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/488412.pem /etc/ssl/certs/488412.pem
	I1217 20:36:03.141755  528764 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/488412.pem
	I1217 20:36:03.145627  528764 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 20:21 /usr/share/ca-certificates/488412.pem
	I1217 20:36:03.145694  528764 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/488412.pem
	I1217 20:36:03.186918  528764 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 20:36:03.196074  528764 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4884122.pem
	I1217 20:36:03.205007  528764 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4884122.pem /etc/ssl/certs/4884122.pem
	I1217 20:36:03.212820  528764 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4884122.pem
	I1217 20:36:03.216798  528764 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 20:21 /usr/share/ca-certificates/4884122.pem
	I1217 20:36:03.216865  528764 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4884122.pem
	I1217 20:36:03.260241  528764 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 20:36:03.268200  528764 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:36:03.275663  528764 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 20:36:03.283259  528764 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:36:03.287077  528764 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:36:03.287187  528764 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:36:03.328526  528764 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
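	(The test/ln/x509 sequence above follows OpenSSL's subject-hash convention: openssl x509 -hash -noout prints the hash, b5213941 for minikubeCA here, under which OpenSSL expects to find the CA as a <hash>.0 symlink in /etc/ssl/certs. The same step done by hand, as a sketch:

	  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"
	)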
	I1217 20:36:03.336152  528764 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-certificates >/dev/null 2>&1 && sudo update-ca-certificates || true"
	I1217 20:36:03.339768  528764 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-trust >/dev/null 2>&1 && sudo update-ca-trust extract || true"
	I1217 20:36:03.343092  528764 ssh_runner.go:195] Run: cat /version.json
	I1217 20:36:03.343166  528764 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 20:36:03.444762  528764 ssh_runner.go:195] Run: systemctl --version
	I1217 20:36:03.450992  528764 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 20:36:03.489251  528764 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 20:36:03.493525  528764 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 20:36:03.493594  528764 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 20:36:03.501380  528764 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
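	(The find invocation above sidelines any bridge or podman CNI config by appending .mk_disabled, so the CNI loader no longer picks it up and kindnet remains the only network plugin. With explicit quoting it reads roughly as:

	  sudo find /etc/cni/net.d -maxdepth 1 -type f \
	    \( \( -name '*bridge*' -o -name '*podman*' \) -a ! -name '*.mk_disabled' \) \
	    -printf '%p, ' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
	)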
	I1217 20:36:03.501400  528764 start.go:496] detecting cgroup driver to use...
	I1217 20:36:03.501430  528764 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 20:36:03.501474  528764 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 20:36:03.519927  528764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 20:36:03.535865  528764 docker.go:218] disabling cri-docker service (if available) ...
	I1217 20:36:03.535924  528764 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 20:36:03.553665  528764 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 20:36:03.568077  528764 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 20:36:03.688788  528764 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 20:36:03.816391  528764 docker.go:234] disabling docker service ...
	I1217 20:36:03.816445  528764 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 20:36:03.832743  528764 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 20:36:03.846562  528764 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 20:36:03.965969  528764 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 20:36:04.109607  528764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 20:36:04.122680  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 20:36:04.137683  528764 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 20:36:04.137752  528764 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:36:04.147364  528764 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1217 20:36:04.147423  528764 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:36:04.157452  528764 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:36:04.166810  528764 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:36:04.176014  528764 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 20:36:04.184171  528764 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:36:04.192938  528764 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:36:04.201542  528764 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
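	(Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf containing, reconstructed from the commands rather than captured from the node:

	  pause_image = "registry.k8s.io/pause:3.10.1"
	  cgroup_manager = "cgroupfs"
	  conmon_cgroup = "pod"
	  default_sysctls = [
	    "net.ipv4.ip_unprivileged_port_start=0",
	  ]
	)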
	I1217 20:36:04.210110  528764 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 20:36:04.217743  528764 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 20:36:04.225321  528764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:36:04.332263  528764 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 20:36:04.503245  528764 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 20:36:04.503305  528764 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 20:36:04.508393  528764 start.go:564] Will wait 60s for crictl version
	I1217 20:36:04.508461  528764 ssh_runner.go:195] Run: which crictl
	I1217 20:36:04.512401  528764 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 20:36:04.541968  528764 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 20:36:04.542059  528764 ssh_runner.go:195] Run: crio --version
	I1217 20:36:04.568941  528764 ssh_runner.go:195] Run: crio --version
	I1217 20:36:04.602248  528764 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	I1217 20:36:04.604894  528764 cli_runner.go:164] Run: docker network inspect functional-655452 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
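	(The Go template in that inspect call flattens the network into one JSON object; for this cluster it would render along the lines of the following, with values illustrative and only the 192.168.49.x addressing confirmed by the log:

	  {"Name": "functional-655452","Driver": "bridge","Subnet": "192.168.49.0/24","Gateway": "192.168.49.1","MTU": 1500, "ContainerIPs": ["192.168.49.2/24",]}
	)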
	I1217 20:36:04.620832  528764 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1217 20:36:04.627460  528764 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1217 20:36:04.630066  528764 kubeadm.go:884] updating cluster {Name:functional-655452 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-655452 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 20:36:04.630187  528764 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1217 20:36:04.630246  528764 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 20:36:04.668067  528764 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 20:36:04.668079  528764 crio.go:433] Images already preloaded, skipping extraction
	I1217 20:36:04.668136  528764 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 20:36:04.698017  528764 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 20:36:04.698030  528764 cache_images.go:86] Images are preloaded, skipping loading
	I1217 20:36:04.698036  528764 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 crio true true} ...
	I1217 20:36:04.698140  528764 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-655452 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-655452 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 20:36:04.698216  528764 ssh_runner.go:195] Run: crio config
	I1217 20:36:04.769162  528764 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1217 20:36:04.769193  528764 cni.go:84] Creating CNI manager for ""
	I1217 20:36:04.769200  528764 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 20:36:04.769208  528764 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 20:36:04.769233  528764 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-655452 NodeName:functional-655452 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 20:36:04.769373  528764 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-655452"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 20:36:04.769444  528764 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1217 20:36:04.777167  528764 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 20:36:04.777239  528764 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 20:36:04.784566  528764 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1217 20:36:04.797984  528764 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1217 20:36:04.810563  528764 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2069 bytes)
	I1217 20:36:04.823513  528764 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1217 20:36:04.827291  528764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:36:04.950251  528764 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 20:36:05.072220  528764 certs.go:69] Setting up /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452 for IP: 192.168.49.2
	I1217 20:36:05.072231  528764 certs.go:195] generating shared ca certs ...
	I1217 20:36:05.072245  528764 certs.go:227] acquiring lock for ca certs: {Name:mk2b1bd9fa0b029b02dd38a7b1b08717d4426eca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:36:05.072401  528764 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.key
	I1217 20:36:05.072442  528764 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.key
	I1217 20:36:05.072448  528764 certs.go:257] generating profile certs ...
	I1217 20:36:05.072540  528764 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/client.key
	I1217 20:36:05.072591  528764 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/apiserver.key.aa95dda5
	I1217 20:36:05.072629  528764 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/proxy-client.key
	I1217 20:36:05.072739  528764 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem (1338 bytes)
	W1217 20:36:05.072768  528764 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412_empty.pem, impossibly tiny 0 bytes
	I1217 20:36:05.072780  528764 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 20:36:05.072805  528764 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem (1082 bytes)
	I1217 20:36:05.072827  528764 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem (1123 bytes)
	I1217 20:36:05.072848  528764 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem (1675 bytes)
	I1217 20:36:05.072891  528764 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem (1708 bytes)
	I1217 20:36:05.073535  528764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 20:36:05.100676  528764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 20:36:05.124485  528764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 20:36:05.145313  528764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1217 20:36:05.166267  528764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 20:36:05.185043  528764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1217 20:36:05.202568  528764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 20:36:05.220530  528764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 20:36:05.238845  528764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem --> /usr/share/ca-certificates/4884122.pem (1708 bytes)
	I1217 20:36:05.257230  528764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 20:36:05.275490  528764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem --> /usr/share/ca-certificates/488412.pem (1338 bytes)
	I1217 20:36:05.293936  528764 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 20:36:05.307062  528764 ssh_runner.go:195] Run: openssl version
	I1217 20:36:05.314048  528764 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/488412.pem
	I1217 20:36:05.321882  528764 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/488412.pem /etc/ssl/certs/488412.pem
	I1217 20:36:05.329752  528764 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/488412.pem
	I1217 20:36:05.333743  528764 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 20:21 /usr/share/ca-certificates/488412.pem
	I1217 20:36:05.333820  528764 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/488412.pem
	I1217 20:36:05.375575  528764 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 20:36:05.383326  528764 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4884122.pem
	I1217 20:36:05.390831  528764 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4884122.pem /etc/ssl/certs/4884122.pem
	I1217 20:36:05.398670  528764 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4884122.pem
	I1217 20:36:05.402451  528764 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 20:21 /usr/share/ca-certificates/4884122.pem
	I1217 20:36:05.402506  528764 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4884122.pem
	I1217 20:36:05.445761  528764 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 20:36:05.453165  528764 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:36:05.460611  528764 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 20:36:05.468452  528764 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:36:05.472228  528764 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:36:05.472283  528764 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:36:05.513950  528764 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 20:36:05.521563  528764 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 20:36:05.525764  528764 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 20:36:05.567120  528764 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 20:36:05.608840  528764 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 20:36:05.649788  528764 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 20:36:05.692741  528764 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 20:36:05.738724  528764 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
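	(openssl's -checkend N exits 0 when the certificate is still valid N seconds from now and 1 otherwise, so each of the six checks above asks whether a control-plane certificate expires within 24 hours, i.e. 86400 seconds. In isolation:

	  openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
	    && echo "valid for at least 24h" || echo "expires within 24h"
	)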
	I1217 20:36:05.779654  528764 kubeadm.go:401] StartCluster: {Name:functional-655452 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-655452 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:36:05.779744  528764 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 20:36:05.779806  528764 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 20:36:05.806396  528764 cri.go:89] found id: ""
	I1217 20:36:05.806453  528764 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 20:36:05.814019  528764 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 20:36:05.814027  528764 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 20:36:05.814076  528764 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 20:36:05.823754  528764 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 20:36:05.824259  528764 kubeconfig.go:125] found "functional-655452" server: "https://192.168.49.2:8441"
	I1217 20:36:05.825529  528764 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 20:36:05.834629  528764 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-17 20:21:29.177912325 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-17 20:36:04.817890668 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
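	(Drift detection here reduces to diff's exit status: 0 means identical, 1 means the files differ and the cluster gets reconfigured, greater than 1 means an error. The same check in isolation:

	  sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new >/dev/null
	  echo $?   # 1 in this run, because enable-admission-plugins changed
	)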
	I1217 20:36:05.834639  528764 kubeadm.go:1161] stopping kube-system containers ...
	I1217 20:36:05.834650  528764 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1217 20:36:05.834705  528764 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 20:36:05.867919  528764 cri.go:89] found id: ""
	I1217 20:36:05.867989  528764 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1217 20:36:05.885438  528764 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 20:36:05.893366  528764 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Dec 17 20:25 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec 17 20:25 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec 17 20:25 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec 17 20:25 /etc/kubernetes/scheduler.conf
	
	I1217 20:36:05.893420  528764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1217 20:36:05.901137  528764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1217 20:36:05.909490  528764 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1217 20:36:05.909550  528764 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 20:36:05.916910  528764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1217 20:36:05.924811  528764 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1217 20:36:05.924869  528764 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 20:36:05.932331  528764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1217 20:36:05.940039  528764 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1217 20:36:05.940108  528764 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 20:36:05.947225  528764 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 20:36:05.955062  528764 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 20:36:06.001485  528764 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 20:36:07.569758  528764 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.568246795s)
	I1217 20:36:07.569817  528764 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1217 20:36:07.780039  528764 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 20:36:07.827231  528764 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1217 20:36:07.887398  528764 api_server.go:52] waiting for apiserver process to appear ...
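	(The burst of pgrep runs that follows is a fixed-interval wait loop; judging by the timestamps it polls roughly every 500ms, equivalent to:

	  until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do sleep 0.5; done
	)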
	I1217 20:36:07.887476  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:08.388398  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:08.888310  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:09.388248  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:09.887698  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:10.387671  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:10.887697  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:11.387734  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:11.888366  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:12.388180  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:12.888379  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:13.387943  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:13.887667  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:14.388477  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:14.888341  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:15.388247  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:15.888425  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:16.388580  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:16.888356  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:17.387968  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:17.888549  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:18.388370  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:18.887715  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:19.387565  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:19.887775  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:20.388470  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:20.888348  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:21.388333  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:21.888012  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:22.387716  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:22.887746  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:23.388395  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:23.887695  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:24.387756  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:24.887696  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:25.388493  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:25.888451  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:26.387822  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:26.888379  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:27.388361  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:27.888017  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:28.388584  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:28.887763  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:29.388547  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:29.887757  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:30.387781  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:30.888609  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:31.387635  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:31.888171  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:32.388412  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:32.888528  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:33.387792  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:33.888580  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:34.388192  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:34.888392  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:35.388250  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:35.888600  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:36.388467  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:36.887895  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:37.387730  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:37.888542  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:38.388614  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:38.888493  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:39.387705  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:39.887637  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:40.388516  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:40.887751  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:41.387675  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:41.888681  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:42.387731  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:42.887637  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:43.388408  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:43.888201  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:44.387929  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:44.888382  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:45.387742  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:45.887563  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:46.388569  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:46.888449  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:47.388453  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:47.888066  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:48.387738  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:48.888486  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:49.388004  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:49.887783  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:50.388587  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:50.887797  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:51.388583  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:51.888281  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:52.387751  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:52.888303  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:53.388442  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:53.887964  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:54.387766  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:54.887669  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:55.388318  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:55.888676  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:56.387669  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:56.888505  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:57.387758  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:57.888403  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:58.388534  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:58.887712  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:59.388454  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:59.888308  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:00.387737  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:00.887766  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:01.387557  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:01.888179  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:02.387975  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:02.887807  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:03.387768  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:03.887658  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:04.387571  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:04.887653  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:05.388569  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:05.887566  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:06.387577  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:06.887577  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:07.388433  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:07.887764  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:37:07.887843  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:37:07.914157  528764 cri.go:89] found id: ""
	I1217 20:37:07.914172  528764 logs.go:282] 0 containers: []
	W1217 20:37:07.914179  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:37:07.914184  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:37:07.914241  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:37:07.939801  528764 cri.go:89] found id: ""
	I1217 20:37:07.939815  528764 logs.go:282] 0 containers: []
	W1217 20:37:07.939823  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:37:07.939828  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:37:07.939892  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:37:07.966197  528764 cri.go:89] found id: ""
	I1217 20:37:07.966213  528764 logs.go:282] 0 containers: []
	W1217 20:37:07.966221  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:37:07.966226  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:37:07.966284  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:37:07.997124  528764 cri.go:89] found id: ""
	I1217 20:37:07.997138  528764 logs.go:282] 0 containers: []
	W1217 20:37:07.997145  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:37:07.997150  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:37:07.997211  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:37:08.028280  528764 cri.go:89] found id: ""
	I1217 20:37:08.028295  528764 logs.go:282] 0 containers: []
	W1217 20:37:08.028302  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:37:08.028308  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:37:08.028368  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:37:08.058094  528764 cri.go:89] found id: ""
	I1217 20:37:08.058109  528764 logs.go:282] 0 containers: []
	W1217 20:37:08.058116  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:37:08.058121  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:37:08.058185  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:37:08.085720  528764 cri.go:89] found id: ""
	I1217 20:37:08.085736  528764 logs.go:282] 0 containers: []
	W1217 20:37:08.085744  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:37:08.085752  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:37:08.085763  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:37:08.150624  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:37:08.142553   11126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:08.142992   11126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:08.144649   11126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:08.145220   11126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:08.146782   11126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:37:08.142553   11126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:08.142992   11126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:08.144649   11126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:08.145220   11126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:08.146782   11126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:37:08.150636  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:37:08.150647  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:37:08.217929  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:37:08.217949  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:37:08.250550  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:37:08.250567  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:37:08.318542  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:37:08.318562  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:37:10.835004  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:10.846829  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:37:10.846892  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:37:10.877739  528764 cri.go:89] found id: ""
	I1217 20:37:10.877756  528764 logs.go:282] 0 containers: []
	W1217 20:37:10.877762  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:37:10.877768  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:37:10.877829  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:37:10.903713  528764 cri.go:89] found id: ""
	I1217 20:37:10.903727  528764 logs.go:282] 0 containers: []
	W1217 20:37:10.903735  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:37:10.903740  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:37:10.903802  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:37:10.931733  528764 cri.go:89] found id: ""
	I1217 20:37:10.931747  528764 logs.go:282] 0 containers: []
	W1217 20:37:10.931754  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:37:10.931759  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:37:10.931818  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:37:10.957707  528764 cri.go:89] found id: ""
	I1217 20:37:10.957722  528764 logs.go:282] 0 containers: []
	W1217 20:37:10.957729  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:37:10.957735  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:37:10.957793  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:37:10.986438  528764 cri.go:89] found id: ""
	I1217 20:37:10.986452  528764 logs.go:282] 0 containers: []
	W1217 20:37:10.986459  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:37:10.986464  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:37:10.986530  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:37:11.014361  528764 cri.go:89] found id: ""
	I1217 20:37:11.014385  528764 logs.go:282] 0 containers: []
	W1217 20:37:11.014393  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:37:11.014402  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:37:11.014462  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:37:11.041366  528764 cri.go:89] found id: ""
	I1217 20:37:11.041381  528764 logs.go:282] 0 containers: []
	W1217 20:37:11.041388  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:37:11.041401  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:37:11.041411  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:37:11.056502  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:37:11.056519  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:37:11.122467  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:37:11.114176   11237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:11.114734   11237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:11.116369   11237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:11.116862   11237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:11.118392   11237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:37:11.114176   11237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:11.114734   11237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:11.116369   11237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:11.116862   11237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:11.118392   11237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:37:11.122477  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:37:11.122486  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:37:11.190244  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:37:11.190265  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:37:11.220700  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:37:11.220717  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
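
Before gathering logs, each cycle checks every expected control-plane component with `sudo crictl ps -a --quiet --name=NAME`. With --quiet, crictl prints only container IDs, one per line, so empty stdout is what produces the `found id: ""` and `0 containers: []` lines above; here not even an exited kube-apiserver container exists. A sketch of the same check, assuming crictl is on PATH and sudo is available:

    // Hedged sketch of the per-component container check: empty stdout from
    // `crictl ps -a --quiet --name=X` means no container (running or exited)
    // matches that component, mirroring the "No container was found" warnings.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	components := []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet",
    	}
    	for _, name := range components {
    		out, err := exec.Command("sudo", "crictl", "ps", "-a",
    			"--quiet", "--name="+name).Output()
    		if err != nil {
    			fmt.Printf("crictl failed for %q: %v\n", name, err)
    			continue
    		}
    		if ids := strings.Fields(string(out)); len(ids) > 0 {
    			fmt.Printf("%s: %d container(s) %v\n", name, len(ids), ids)
    		} else {
    			fmt.Printf("no container was found matching %q\n", name)
    		}
    	}
    }
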
	I1217 20:37:13.792757  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:13.802840  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:37:13.802899  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:37:13.836386  528764 cri.go:89] found id: ""
	I1217 20:37:13.836401  528764 logs.go:282] 0 containers: []
	W1217 20:37:13.836408  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:37:13.836415  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:37:13.836471  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:37:13.870570  528764 cri.go:89] found id: ""
	I1217 20:37:13.870585  528764 logs.go:282] 0 containers: []
	W1217 20:37:13.870592  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:37:13.870597  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:37:13.870656  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:37:13.898823  528764 cri.go:89] found id: ""
	I1217 20:37:13.898837  528764 logs.go:282] 0 containers: []
	W1217 20:37:13.898845  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:37:13.898850  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:37:13.898908  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:37:13.926200  528764 cri.go:89] found id: ""
	I1217 20:37:13.926214  528764 logs.go:282] 0 containers: []
	W1217 20:37:13.926221  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:37:13.926226  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:37:13.926284  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:37:13.952625  528764 cri.go:89] found id: ""
	I1217 20:37:13.952639  528764 logs.go:282] 0 containers: []
	W1217 20:37:13.952647  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:37:13.952652  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:37:13.952711  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:37:13.978517  528764 cri.go:89] found id: ""
	I1217 20:37:13.978531  528764 logs.go:282] 0 containers: []
	W1217 20:37:13.978539  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:37:13.978544  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:37:13.978602  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:37:14.010201  528764 cri.go:89] found id: ""
	I1217 20:37:14.010215  528764 logs.go:282] 0 containers: []
	W1217 20:37:14.010223  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:37:14.010231  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:37:14.010242  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:37:14.075917  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:37:14.075936  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:37:14.091123  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:37:14.091142  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:37:14.155624  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:37:14.146305   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:14.147214   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:14.149124   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:14.149628   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:14.151335   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:37:14.146305   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:14.147214   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:14.149124   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:14.149628   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:14.151335   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:37:14.155636  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:37:14.155647  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:37:14.224215  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:37:14.224237  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:37:16.756286  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:16.766692  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:37:16.766752  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:37:16.795671  528764 cri.go:89] found id: ""
	I1217 20:37:16.795692  528764 logs.go:282] 0 containers: []
	W1217 20:37:16.795700  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:37:16.795705  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:37:16.795762  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:37:16.829850  528764 cri.go:89] found id: ""
	I1217 20:37:16.829863  528764 logs.go:282] 0 containers: []
	W1217 20:37:16.829870  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:37:16.829875  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:37:16.829932  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:37:16.860495  528764 cri.go:89] found id: ""
	I1217 20:37:16.860509  528764 logs.go:282] 0 containers: []
	W1217 20:37:16.860516  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:37:16.860521  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:37:16.860580  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:37:16.888120  528764 cri.go:89] found id: ""
	I1217 20:37:16.888133  528764 logs.go:282] 0 containers: []
	W1217 20:37:16.888141  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:37:16.888146  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:37:16.888201  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:37:16.918449  528764 cri.go:89] found id: ""
	I1217 20:37:16.918463  528764 logs.go:282] 0 containers: []
	W1217 20:37:16.918469  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:37:16.918484  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:37:16.918542  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:37:16.948626  528764 cri.go:89] found id: ""
	I1217 20:37:16.948652  528764 logs.go:282] 0 containers: []
	W1217 20:37:16.948659  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:37:16.948665  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:37:16.948729  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:37:16.977608  528764 cri.go:89] found id: ""
	I1217 20:37:16.977622  528764 logs.go:282] 0 containers: []
	W1217 20:37:16.977630  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:37:16.977637  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:37:16.977647  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:37:17.042493  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:37:17.042513  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:37:17.057131  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:37:17.057148  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:37:17.125378  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:37:17.116772   11447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:17.117492   11447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:17.119295   11447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:17.119699   11447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:17.121218   11447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:37:17.116772   11447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:17.117492   11447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:17.119295   11447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:17.119699   11447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:17.121218   11447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:37:17.125389  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:37:17.125400  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:37:17.192802  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:37:17.192822  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
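
The timestamps show the whole sequence repeating roughly every 2.5 to 3 seconds: `sudo pgrep -xnf kube-apiserver.*minikube.*` finds no apiserver process, the seven crictl checks all come back empty, and the logs are re-gathered. In other words, this is a poll-until-deadline loop waiting for the apiserver to appear. A sketch of that retry pattern follows, with the interval taken from the log gaps and the overall deadline assumed, not taken from the log:

    // Hedged sketch of the polling loop implied by the timestamps: pgrep exits
    // non-zero when nothing matches, which is the "apiserver not up yet" case.
    // The 2.5s interval matches the gaps in this log; the deadline is assumed.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	deadline := time.Now().Add(6 * time.Minute) // assumption, not from the log
    	for time.Now().Before(deadline) {
    		// -x whole-line match, -n newest process, -f match full command line.
    		if exec.Command("sudo", "pgrep", "-xnf",
    			"kube-apiserver.*minikube.*").Run() == nil {
    			fmt.Println("kube-apiserver process found")
    			return
    		}
    		time.Sleep(2500 * time.Millisecond)
    	}
    	fmt.Println("gave up waiting for kube-apiserver")
    }
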
	I1217 20:37:19.720869  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:19.730761  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:37:19.730822  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:37:19.757595  528764 cri.go:89] found id: ""
	I1217 20:37:19.757609  528764 logs.go:282] 0 containers: []
	W1217 20:37:19.757617  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:37:19.757622  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:37:19.757679  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:37:19.783074  528764 cri.go:89] found id: ""
	I1217 20:37:19.783087  528764 logs.go:282] 0 containers: []
	W1217 20:37:19.783102  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:37:19.783108  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:37:19.783165  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:37:19.810405  528764 cri.go:89] found id: ""
	I1217 20:37:19.810419  528764 logs.go:282] 0 containers: []
	W1217 20:37:19.810426  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:37:19.810432  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:37:19.810493  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:37:19.837744  528764 cri.go:89] found id: ""
	I1217 20:37:19.837758  528764 logs.go:282] 0 containers: []
	W1217 20:37:19.837766  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:37:19.837771  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:37:19.837828  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:37:19.873857  528764 cri.go:89] found id: ""
	I1217 20:37:19.873872  528764 logs.go:282] 0 containers: []
	W1217 20:37:19.873879  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:37:19.873884  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:37:19.873952  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:37:19.902376  528764 cri.go:89] found id: ""
	I1217 20:37:19.902390  528764 logs.go:282] 0 containers: []
	W1217 20:37:19.902397  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:37:19.902402  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:37:19.902477  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:37:19.928530  528764 cri.go:89] found id: ""
	I1217 20:37:19.928544  528764 logs.go:282] 0 containers: []
	W1217 20:37:19.928552  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:37:19.928559  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:37:19.928570  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:37:19.993175  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:37:19.984569   11545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:19.985337   11545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:19.986955   11545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:19.987601   11545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:19.989181   11545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:37:19.984569   11545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:19.985337   11545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:19.986955   11545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:19.987601   11545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:19.989181   11545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:37:19.993185  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:37:19.993196  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:37:20.066305  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:37:20.066326  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:37:20.099789  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:37:20.099806  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:37:20.165283  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:37:20.165304  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:37:22.681290  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:22.691134  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:37:22.691202  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:37:22.723831  528764 cri.go:89] found id: ""
	I1217 20:37:22.723845  528764 logs.go:282] 0 containers: []
	W1217 20:37:22.723862  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:37:22.723868  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:37:22.723933  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:37:22.749315  528764 cri.go:89] found id: ""
	I1217 20:37:22.749329  528764 logs.go:282] 0 containers: []
	W1217 20:37:22.749336  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:37:22.749341  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:37:22.749396  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:37:22.773712  528764 cri.go:89] found id: ""
	I1217 20:37:22.773738  528764 logs.go:282] 0 containers: []
	W1217 20:37:22.773746  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:37:22.773751  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:37:22.773825  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:37:22.799128  528764 cri.go:89] found id: ""
	I1217 20:37:22.799147  528764 logs.go:282] 0 containers: []
	W1217 20:37:22.799154  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:37:22.799159  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:37:22.799214  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:37:22.830333  528764 cri.go:89] found id: ""
	I1217 20:37:22.830347  528764 logs.go:282] 0 containers: []
	W1217 20:37:22.830354  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:37:22.830359  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:37:22.830414  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:37:22.857658  528764 cri.go:89] found id: ""
	I1217 20:37:22.857671  528764 logs.go:282] 0 containers: []
	W1217 20:37:22.857678  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:37:22.857683  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:37:22.857740  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:37:22.892187  528764 cri.go:89] found id: ""
	I1217 20:37:22.892202  528764 logs.go:282] 0 containers: []
	W1217 20:37:22.892209  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:37:22.892217  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:37:22.892226  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:37:22.963552  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:37:22.963572  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:37:22.992259  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:37:22.992274  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:37:23.058615  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:37:23.058636  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:37:23.073409  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:37:23.073442  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:37:23.138641  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:37:23.129846   11670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:23.130677   11670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:23.132360   11670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:23.132912   11670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:23.134580   11670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:37:23.129846   11670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:23.130677   11670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:23.132360   11670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:23.132912   11670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:23.134580   11670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:37:25.638919  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:25.648946  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:37:25.649032  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:37:25.678111  528764 cri.go:89] found id: ""
	I1217 20:37:25.678127  528764 logs.go:282] 0 containers: []
	W1217 20:37:25.678134  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:37:25.678140  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:37:25.678230  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:37:25.704834  528764 cri.go:89] found id: ""
	I1217 20:37:25.704848  528764 logs.go:282] 0 containers: []
	W1217 20:37:25.704855  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:37:25.704861  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:37:25.704943  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:37:25.731274  528764 cri.go:89] found id: ""
	I1217 20:37:25.731287  528764 logs.go:282] 0 containers: []
	W1217 20:37:25.731295  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:37:25.731300  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:37:25.731354  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:37:25.756601  528764 cri.go:89] found id: ""
	I1217 20:37:25.756615  528764 logs.go:282] 0 containers: []
	W1217 20:37:25.756622  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:37:25.756628  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:37:25.756689  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:37:25.781743  528764 cri.go:89] found id: ""
	I1217 20:37:25.781757  528764 logs.go:282] 0 containers: []
	W1217 20:37:25.781764  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:37:25.781787  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:37:25.781846  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:37:25.810686  528764 cri.go:89] found id: ""
	I1217 20:37:25.810699  528764 logs.go:282] 0 containers: []
	W1217 20:37:25.810718  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:37:25.810724  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:37:25.810791  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:37:25.861184  528764 cri.go:89] found id: ""
	I1217 20:37:25.861200  528764 logs.go:282] 0 containers: []
	W1217 20:37:25.861207  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:37:25.861215  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:37:25.861237  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:37:25.937980  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:37:25.938000  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:37:25.953961  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:37:25.953980  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:37:26.020362  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:37:26.011417   11762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:26.012115   11762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:26.013886   11762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:26.014423   11762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:26.016131   11762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:37:26.011417   11762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:26.012115   11762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:26.013886   11762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:26.014423   11762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:26.016131   11762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:37:26.020376  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:37:26.020387  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:37:26.092647  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:37:26.092669  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:37:28.622440  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:28.632675  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:37:28.632735  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:37:28.657198  528764 cri.go:89] found id: ""
	I1217 20:37:28.657213  528764 logs.go:282] 0 containers: []
	W1217 20:37:28.657220  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:37:28.657226  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:37:28.657284  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:37:28.683432  528764 cri.go:89] found id: ""
	I1217 20:37:28.683446  528764 logs.go:282] 0 containers: []
	W1217 20:37:28.683453  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:37:28.683458  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:37:28.683513  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:37:28.708948  528764 cri.go:89] found id: ""
	I1217 20:37:28.708962  528764 logs.go:282] 0 containers: []
	W1217 20:37:28.708969  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:37:28.708975  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:37:28.709030  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:37:28.738615  528764 cri.go:89] found id: ""
	I1217 20:37:28.738629  528764 logs.go:282] 0 containers: []
	W1217 20:37:28.738637  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:37:28.738642  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:37:28.738697  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:37:28.764458  528764 cri.go:89] found id: ""
	I1217 20:37:28.764472  528764 logs.go:282] 0 containers: []
	W1217 20:37:28.764479  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:37:28.764484  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:37:28.764544  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:37:28.789220  528764 cri.go:89] found id: ""
	I1217 20:37:28.789234  528764 logs.go:282] 0 containers: []
	W1217 20:37:28.789242  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:37:28.789247  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:37:28.789302  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:37:28.813820  528764 cri.go:89] found id: ""
	I1217 20:37:28.813835  528764 logs.go:282] 0 containers: []
	W1217 20:37:28.813841  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:37:28.813848  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:37:28.813869  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:37:28.896349  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:37:28.887880   11857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:28.888593   11857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:28.890336   11857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:28.890868   11857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:28.892434   11857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:37:28.887880   11857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:28.888593   11857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:28.890336   11857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:28.890868   11857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:28.892434   11857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:37:28.896359  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:37:28.896369  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:37:28.964976  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:37:28.964996  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:37:28.995089  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:37:28.995105  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:37:29.073565  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:37:29.073593  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:37:31.589038  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:31.599070  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:37:31.599131  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:37:31.624604  528764 cri.go:89] found id: ""
	I1217 20:37:31.624619  528764 logs.go:282] 0 containers: []
	W1217 20:37:31.624626  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:37:31.624631  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:37:31.624688  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:37:31.650593  528764 cri.go:89] found id: ""
	I1217 20:37:31.650608  528764 logs.go:282] 0 containers: []
	W1217 20:37:31.650616  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:37:31.650621  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:37:31.650684  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:37:31.679069  528764 cri.go:89] found id: ""
	I1217 20:37:31.679084  528764 logs.go:282] 0 containers: []
	W1217 20:37:31.679091  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:37:31.679096  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:37:31.679153  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:37:31.709079  528764 cri.go:89] found id: ""
	I1217 20:37:31.709093  528764 logs.go:282] 0 containers: []
	W1217 20:37:31.709100  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:37:31.709105  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:37:31.709162  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:37:31.740223  528764 cri.go:89] found id: ""
	I1217 20:37:31.740237  528764 logs.go:282] 0 containers: []
	W1217 20:37:31.740244  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:37:31.740252  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:37:31.740307  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:37:31.771855  528764 cri.go:89] found id: ""
	I1217 20:37:31.771869  528764 logs.go:282] 0 containers: []
	W1217 20:37:31.771877  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:37:31.771883  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:37:31.771942  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:37:31.798992  528764 cri.go:89] found id: ""
	I1217 20:37:31.799006  528764 logs.go:282] 0 containers: []
	W1217 20:37:31.799013  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:37:31.799021  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:37:31.799031  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:37:31.876265  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:37:31.876285  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:37:31.912678  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:37:31.912694  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:37:31.979473  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:37:31.979494  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:37:31.994138  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:37:31.994154  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:37:32.058919  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:37:32.050836   11991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:32.051662   11991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:32.053342   11991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:32.053690   11991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:32.055195   11991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:37:32.050836   11991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:32.051662   11991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:32.053342   11991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:32.053690   11991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:32.055195   11991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
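
Across this whole excerpt (20:37:08 through 20:37:32) the loop never finds a kube-apiserver process or container, and every describe-nodes attempt fails with the identical connection-refused stderr, so only the kubelet, CRI-O, dmesg, and container-status logs are ever collected. An HTTP-level readiness probe against the same endpoint makes that state explicit; the sketch below assumes kube-apiserver's /readyz endpoint is reachable anonymously (the default in kubeadm-style clusters) and skips certificate verification since it targets localhost for diagnostics only.

    // Hedged sketch: ask the apiserver itself whether it is ready. While the
    // process is down this fails exactly like the kubectl calls above, with
    // "connect: connection refused" on localhost:8441.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 3 * time.Second,
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get("https://localhost:8441/readyz")
    	if err != nil {
    		fmt.Printf("apiserver not ready: %v\n", err)
    		return
    	}
    	defer resp.Body.Close()
    	fmt.Printf("apiserver readyz: %s\n", resp.Status)
    }
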
	I1217 20:37:34.560573  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:34.570410  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:37:34.570477  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:37:34.595394  528764 cri.go:89] found id: ""
	I1217 20:37:34.595407  528764 logs.go:282] 0 containers: []
	W1217 20:37:34.595415  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:37:34.595420  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:37:34.595474  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:37:34.620347  528764 cri.go:89] found id: ""
	I1217 20:37:34.620362  528764 logs.go:282] 0 containers: []
	W1217 20:37:34.620376  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:37:34.620382  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:37:34.620444  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:37:34.646173  528764 cri.go:89] found id: ""
	I1217 20:37:34.646188  528764 logs.go:282] 0 containers: []
	W1217 20:37:34.646195  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:37:34.646200  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:37:34.646259  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:37:34.675076  528764 cri.go:89] found id: ""
	I1217 20:37:34.675090  528764 logs.go:282] 0 containers: []
	W1217 20:37:34.675098  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:37:34.675103  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:37:34.675160  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:37:34.700382  528764 cri.go:89] found id: ""
	I1217 20:37:34.700396  528764 logs.go:282] 0 containers: []
	W1217 20:37:34.700403  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:37:34.700414  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:37:34.700479  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:37:34.727372  528764 cri.go:89] found id: ""
	I1217 20:37:34.727387  528764 logs.go:282] 0 containers: []
	W1217 20:37:34.727394  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:37:34.727400  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:37:34.727456  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:37:34.753290  528764 cri.go:89] found id: ""
	I1217 20:37:34.753305  528764 logs.go:282] 0 containers: []
	W1217 20:37:34.753312  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:37:34.753319  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:37:34.753331  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:37:34.782001  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:37:34.782019  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:37:34.847492  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:37:34.847511  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:37:34.863498  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:37:34.863515  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:37:34.939936  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:37:34.931817   12094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:34.932582   12094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:34.934298   12094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:34.934763   12094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:34.936241   12094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 20:37:34.939947  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:37:34.939958  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
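Each pass above runs one crictl query per control-plane component and finds nothing. The probe can be reproduced by hand; a minimal sketch, reusing the component names and flags recorded in the Run: lines (run inside the minikube node, assuming crictl is on PATH):

  for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
           kube-controller-manager kindnet; do
    # --quiet prints container IDs only; empty output is the found id: "" case above
    ids=$(sudo crictl ps -a --quiet --name="$c")
    [ -n "$ids" ] && echo "$c: $ids" || echo "no container matching \"$c\""
  done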
	I1217 20:37:37.511892  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:37.522041  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:37:37.522101  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:37:37.546092  528764 cri.go:89] found id: ""
	I1217 20:37:37.546106  528764 logs.go:282] 0 containers: []
	W1217 20:37:37.546113  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:37:37.546119  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:37:37.546179  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:37:37.571827  528764 cri.go:89] found id: ""
	I1217 20:37:37.571841  528764 logs.go:282] 0 containers: []
	W1217 20:37:37.571848  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:37:37.571853  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:37:37.571912  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:37:37.597752  528764 cri.go:89] found id: ""
	I1217 20:37:37.597766  528764 logs.go:282] 0 containers: []
	W1217 20:37:37.597774  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:37:37.597779  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:37:37.597840  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:37:37.624088  528764 cri.go:89] found id: ""
	I1217 20:37:37.624102  528764 logs.go:282] 0 containers: []
	W1217 20:37:37.624109  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:37:37.624114  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:37:37.624170  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:37:37.651097  528764 cri.go:89] found id: ""
	I1217 20:37:37.651112  528764 logs.go:282] 0 containers: []
	W1217 20:37:37.651119  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:37:37.651125  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:37:37.651188  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:37:37.678706  528764 cri.go:89] found id: ""
	I1217 20:37:37.678720  528764 logs.go:282] 0 containers: []
	W1217 20:37:37.678728  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:37:37.678743  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:37:37.678804  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:37:37.705805  528764 cri.go:89] found id: ""
	I1217 20:37:37.705817  528764 logs.go:282] 0 containers: []
	W1217 20:37:37.705825  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:37:37.705833  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:37:37.705844  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:37:37.721021  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:37:37.721041  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:37:37.788297  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:37:37.779817   12180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:37.780331   12180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:37.782117   12180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:37.782612   12180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:37.784219   12180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 20:37:37.788308  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:37:37.788318  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:37:37.865227  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:37:37.865247  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:37:37.897290  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:37:37.897308  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
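The "Gathering logs" phase pulls the same sources on every pass. Four of them are plain shell commands, taken verbatim from the Run: lines above (systemd host; the container-status command falls back to docker when crictl is absent); the fifth, describe nodes, is the kubectl call discussed further down:

  sudo journalctl -u kubelet -n 400
  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
  sudo journalctl -u crio -n 400
  sudo `which crictl || echo crictl` ps -a || sudo docker ps -a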
	I1217 20:37:40.462446  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:40.472823  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:37:40.472885  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:37:40.502899  528764 cri.go:89] found id: ""
	I1217 20:37:40.502914  528764 logs.go:282] 0 containers: []
	W1217 20:37:40.502926  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:37:40.502931  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:37:40.502988  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:37:40.528131  528764 cri.go:89] found id: ""
	I1217 20:37:40.528144  528764 logs.go:282] 0 containers: []
	W1217 20:37:40.528151  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:37:40.528156  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:37:40.528214  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:37:40.552632  528764 cri.go:89] found id: ""
	I1217 20:37:40.552646  528764 logs.go:282] 0 containers: []
	W1217 20:37:40.552653  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:37:40.552659  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:37:40.552715  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:37:40.578013  528764 cri.go:89] found id: ""
	I1217 20:37:40.578028  528764 logs.go:282] 0 containers: []
	W1217 20:37:40.578035  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:37:40.578042  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:37:40.578100  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:37:40.604172  528764 cri.go:89] found id: ""
	I1217 20:37:40.604186  528764 logs.go:282] 0 containers: []
	W1217 20:37:40.604193  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:37:40.604198  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:37:40.604253  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:37:40.629837  528764 cri.go:89] found id: ""
	I1217 20:37:40.629851  528764 logs.go:282] 0 containers: []
	W1217 20:37:40.629867  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:37:40.629872  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:37:40.629931  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:37:40.656555  528764 cri.go:89] found id: ""
	I1217 20:37:40.656568  528764 logs.go:282] 0 containers: []
	W1217 20:37:40.656576  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:37:40.656583  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:37:40.656593  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:37:40.670930  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:37:40.670946  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:37:40.736814  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:37:40.728916   12288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:40.729413   12288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:40.731058   12288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:40.731457   12288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:40.733254   12288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 20:37:40.736824  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:37:40.736835  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:37:40.803782  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:37:40.803800  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:37:40.851556  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:37:40.851572  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:37:43.430627  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:43.440939  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:37:43.441000  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:37:43.470749  528764 cri.go:89] found id: ""
	I1217 20:37:43.470764  528764 logs.go:282] 0 containers: []
	W1217 20:37:43.470771  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:37:43.470777  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:37:43.470833  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:37:43.495753  528764 cri.go:89] found id: ""
	I1217 20:37:43.495766  528764 logs.go:282] 0 containers: []
	W1217 20:37:43.495774  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:37:43.495779  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:37:43.495836  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:37:43.521880  528764 cri.go:89] found id: ""
	I1217 20:37:43.521896  528764 logs.go:282] 0 containers: []
	W1217 20:37:43.521903  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:37:43.521908  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:37:43.521971  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:37:43.547990  528764 cri.go:89] found id: ""
	I1217 20:37:43.548004  528764 logs.go:282] 0 containers: []
	W1217 20:37:43.548012  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:37:43.548018  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:37:43.548080  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:37:43.576401  528764 cri.go:89] found id: ""
	I1217 20:37:43.576415  528764 logs.go:282] 0 containers: []
	W1217 20:37:43.576422  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:37:43.576427  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:37:43.576485  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:37:43.604828  528764 cri.go:89] found id: ""
	I1217 20:37:43.604840  528764 logs.go:282] 0 containers: []
	W1217 20:37:43.604848  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:37:43.604853  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:37:43.604909  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:37:43.636907  528764 cri.go:89] found id: ""
	I1217 20:37:43.636920  528764 logs.go:282] 0 containers: []
	W1217 20:37:43.636927  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:37:43.636935  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:37:43.636945  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:37:43.701148  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:37:43.701165  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:37:43.715342  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:37:43.715357  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:37:43.787937  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:37:43.780156   12396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:43.780601   12396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:43.782216   12396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:43.782718   12396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:43.784167   12396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 20:37:43.787957  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:37:43.787968  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:37:43.858959  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:37:43.858978  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
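The recurring "failed describe nodes" warning is the one log source that needs a live apiserver. The failing call, exactly as recorded in the Run: lines, invokes minikube's bundled kubectl against the in-node kubeconfig:

  sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes \
    --kubeconfig=/var/lib/minikube/kubeconfig

It exits 1 with "connection refused" for as long as nothing serves localhost:8441, so the same warning repeats on every pass.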
	I1217 20:37:46.395799  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:46.406118  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:37:46.406190  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:37:46.433062  528764 cri.go:89] found id: ""
	I1217 20:37:46.433076  528764 logs.go:282] 0 containers: []
	W1217 20:37:46.433083  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:37:46.433089  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:37:46.433151  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:37:46.459553  528764 cri.go:89] found id: ""
	I1217 20:37:46.459568  528764 logs.go:282] 0 containers: []
	W1217 20:37:46.459575  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:37:46.459604  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:37:46.459668  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:37:46.484831  528764 cri.go:89] found id: ""
	I1217 20:37:46.484845  528764 logs.go:282] 0 containers: []
	W1217 20:37:46.484853  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:37:46.484858  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:37:46.484920  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:37:46.509669  528764 cri.go:89] found id: ""
	I1217 20:37:46.509683  528764 logs.go:282] 0 containers: []
	W1217 20:37:46.509690  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:37:46.509695  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:37:46.509752  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:37:46.534227  528764 cri.go:89] found id: ""
	I1217 20:37:46.534242  528764 logs.go:282] 0 containers: []
	W1217 20:37:46.534254  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:37:46.534260  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:37:46.534316  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:37:46.563383  528764 cri.go:89] found id: ""
	I1217 20:37:46.563397  528764 logs.go:282] 0 containers: []
	W1217 20:37:46.563405  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:37:46.563411  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:37:46.563476  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:37:46.589321  528764 cri.go:89] found id: ""
	I1217 20:37:46.589335  528764 logs.go:282] 0 containers: []
	W1217 20:37:46.589342  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:37:46.589350  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:37:46.589364  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:37:46.654894  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:37:46.654914  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:37:46.669806  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:37:46.669822  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:37:46.731726  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:37:46.723562   12499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:46.724128   12499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:46.725849   12499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:46.726343   12499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:46.727890   12499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 20:37:46.731737  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:37:46.731763  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:37:46.799300  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:37:46.799320  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:37:49.348034  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:49.358157  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:37:49.358218  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:37:49.382823  528764 cri.go:89] found id: ""
	I1217 20:37:49.382837  528764 logs.go:282] 0 containers: []
	W1217 20:37:49.382844  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:37:49.382849  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:37:49.382917  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:37:49.409079  528764 cri.go:89] found id: ""
	I1217 20:37:49.409094  528764 logs.go:282] 0 containers: []
	W1217 20:37:49.409101  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:37:49.409106  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:37:49.409162  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:37:49.434313  528764 cri.go:89] found id: ""
	I1217 20:37:49.434327  528764 logs.go:282] 0 containers: []
	W1217 20:37:49.434340  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:37:49.434354  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:37:49.434426  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:37:49.460512  528764 cri.go:89] found id: ""
	I1217 20:37:49.460527  528764 logs.go:282] 0 containers: []
	W1217 20:37:49.460535  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:37:49.460551  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:37:49.460609  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:37:49.486735  528764 cri.go:89] found id: ""
	I1217 20:37:49.486748  528764 logs.go:282] 0 containers: []
	W1217 20:37:49.486756  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:37:49.486762  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:37:49.486830  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:37:49.512071  528764 cri.go:89] found id: ""
	I1217 20:37:49.512085  528764 logs.go:282] 0 containers: []
	W1217 20:37:49.512092  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:37:49.512098  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:37:49.512155  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:37:49.541263  528764 cri.go:89] found id: ""
	I1217 20:37:49.541277  528764 logs.go:282] 0 containers: []
	W1217 20:37:49.541284  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:37:49.541293  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:37:49.541310  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:37:49.570361  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:37:49.570378  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:37:49.638598  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:37:49.638618  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:37:49.653362  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:37:49.653381  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:37:49.715767  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:37:49.708109   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:49.708580   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:49.710179   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:49.710574   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:49.712039   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 20:37:49.715778  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:37:49.715788  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
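The timestamps show the whole probe repeating on a roughly three-second cadence (20:37:34, :37, :40, :43, :46, :49, ...). A minimal wait-loop sketch of that pattern with an explicit deadline; the pgrep pattern is from the log, while the 120 s budget is illustrative only:

  deadline=$((SECONDS + 120))   # illustrative timeout, not taken from the log
  until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
    [ "$SECONDS" -ge "$deadline" ] && { echo "timed out waiting for kube-apiserver" >&2; exit 1; }
    sleep 3   # matches the interval between passes in the log
  done
  echo "kube-apiserver is up"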
	I1217 20:37:52.283800  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:52.293434  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:37:52.293494  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:37:52.318791  528764 cri.go:89] found id: ""
	I1217 20:37:52.318805  528764 logs.go:282] 0 containers: []
	W1217 20:37:52.318812  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:37:52.318818  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:37:52.318876  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:37:52.344510  528764 cri.go:89] found id: ""
	I1217 20:37:52.344525  528764 logs.go:282] 0 containers: []
	W1217 20:37:52.344543  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:37:52.344549  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:37:52.344607  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:37:52.369118  528764 cri.go:89] found id: ""
	I1217 20:37:52.369132  528764 logs.go:282] 0 containers: []
	W1217 20:37:52.369140  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:37:52.369145  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:37:52.369200  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:37:52.394333  528764 cri.go:89] found id: ""
	I1217 20:37:52.394346  528764 logs.go:282] 0 containers: []
	W1217 20:37:52.394377  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:37:52.394383  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:37:52.394448  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:37:52.419501  528764 cri.go:89] found id: ""
	I1217 20:37:52.419525  528764 logs.go:282] 0 containers: []
	W1217 20:37:52.419532  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:37:52.419537  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:37:52.419626  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:37:52.448909  528764 cri.go:89] found id: ""
	I1217 20:37:52.448923  528764 logs.go:282] 0 containers: []
	W1217 20:37:52.448930  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:37:52.448936  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:37:52.449018  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:37:52.478490  528764 cri.go:89] found id: ""
	I1217 20:37:52.478513  528764 logs.go:282] 0 containers: []
	W1217 20:37:52.478521  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:37:52.478529  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:37:52.478539  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:37:52.542920  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:37:52.542939  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:37:52.558035  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:37:52.558052  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:37:52.621690  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:37:52.613254   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:52.613953   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:52.615616   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:52.615996   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:52.617541   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 20:37:52.621710  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:37:52.621721  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:37:52.689051  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:37:52.689070  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:37:55.225326  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:55.235484  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:37:55.235545  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:37:55.260455  528764 cri.go:89] found id: ""
	I1217 20:37:55.260469  528764 logs.go:282] 0 containers: []
	W1217 20:37:55.260477  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:37:55.260482  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:37:55.260542  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:37:55.285381  528764 cri.go:89] found id: ""
	I1217 20:37:55.285396  528764 logs.go:282] 0 containers: []
	W1217 20:37:55.285404  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:37:55.285409  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:37:55.285464  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:37:55.311167  528764 cri.go:89] found id: ""
	I1217 20:37:55.311181  528764 logs.go:282] 0 containers: []
	W1217 20:37:55.311188  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:37:55.311194  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:37:55.311266  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:37:55.336553  528764 cri.go:89] found id: ""
	I1217 20:37:55.336568  528764 logs.go:282] 0 containers: []
	W1217 20:37:55.336575  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:37:55.336580  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:37:55.336636  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:37:55.362555  528764 cri.go:89] found id: ""
	I1217 20:37:55.362569  528764 logs.go:282] 0 containers: []
	W1217 20:37:55.362576  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:37:55.362582  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:37:55.362636  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:37:55.392446  528764 cri.go:89] found id: ""
	I1217 20:37:55.392460  528764 logs.go:282] 0 containers: []
	W1217 20:37:55.392468  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:37:55.392473  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:37:55.392529  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:37:55.421227  528764 cri.go:89] found id: ""
	I1217 20:37:55.421242  528764 logs.go:282] 0 containers: []
	W1217 20:37:55.421250  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:37:55.421257  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:37:55.421267  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:37:55.452467  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:37:55.452485  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:37:55.520333  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:37:55.520354  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:37:55.535397  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:37:55.535423  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:37:55.600267  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:37:55.591521   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:55.592108   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:55.593636   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:55.594292   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:55.596071   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 20:37:55.600278  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:37:55.600290  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
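By this point every component query has returned an empty ID list for over twenty seconds, which points at the runtime never creating the control-plane containers rather than at a crashed apiserver. A short triage sketch for that situation; these are standard systemctl/crictl commands and the kubeadm default manifest path, not taken from this log:

  sudo systemctl is-active crio      # is the container runtime itself running?
  sudo crictl info                   # runtime status and conditions (JSON)
  sudo crictl pods --quiet | wc -l   # zero means no sandboxes were ever created
  ls /etc/kubernetes/manifests       # are the static-pod manifests in place?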
	I1217 20:37:58.172840  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:58.183231  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:37:58.183290  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:37:58.207527  528764 cri.go:89] found id: ""
	I1217 20:37:58.207541  528764 logs.go:282] 0 containers: []
	W1217 20:37:58.207548  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:37:58.207553  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:37:58.207649  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:37:58.232533  528764 cri.go:89] found id: ""
	I1217 20:37:58.232547  528764 logs.go:282] 0 containers: []
	W1217 20:37:58.232555  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:37:58.232559  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:37:58.232613  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:37:58.257969  528764 cri.go:89] found id: ""
	I1217 20:37:58.257983  528764 logs.go:282] 0 containers: []
	W1217 20:37:58.257990  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:37:58.257996  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:37:58.258051  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:37:58.283047  528764 cri.go:89] found id: ""
	I1217 20:37:58.283060  528764 logs.go:282] 0 containers: []
	W1217 20:37:58.283067  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:37:58.283072  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:37:58.283126  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:37:58.308494  528764 cri.go:89] found id: ""
	I1217 20:37:58.308508  528764 logs.go:282] 0 containers: []
	W1217 20:37:58.308515  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:37:58.308521  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:37:58.308578  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:37:58.333008  528764 cri.go:89] found id: ""
	I1217 20:37:58.333022  528764 logs.go:282] 0 containers: []
	W1217 20:37:58.333029  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:37:58.333035  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:37:58.333087  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:37:58.363097  528764 cri.go:89] found id: ""
	I1217 20:37:58.363111  528764 logs.go:282] 0 containers: []
	W1217 20:37:58.363118  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:37:58.363126  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:37:58.363145  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:37:58.428415  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:37:58.419398   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:58.420110   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:58.421854   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:58.422467   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:58.424133   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 20:37:58.428426  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:37:58.428437  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:37:58.497159  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:37:58.497179  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:37:58.528904  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:37:58.528921  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:37:58.594783  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:37:58.594803  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
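
The cycle above is minikube waiting for kube-apiserver to come back: probe the process with pgrep, list each control-plane component with crictl, gather journalctl/dmesg/container-status logs, then retry a few seconds later. A minimal Go sketch of that polling pattern, assuming simplified helper names rather than minikube's actual logs.go API:

// Sketch of the polling loop visible in the log above; helper names are
// illustrative, not minikube's real internals.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning mirrors `sudo pgrep -xnf kube-apiserver.*minikube.*`;
// pgrep exits non-zero when no process matches, so Run() returns an error.
func apiserverRunning() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

// gatherLogs mirrors the journalctl/dmesg/container-status commands above.
func gatherLogs() {
	cmds := []string{
		"sudo journalctl -u crio -n 400",
		"sudo journalctl -u kubelet -n 400",
		"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	for _, c := range cmds {
		out, _ := exec.Command("/bin/bash", "-c", c).CombinedOutput()
		fmt.Printf("--- %s ---\n%s", c, out)
	}
}

func main() {
	for !apiserverRunning() {
		gatherLogs()
		time.Sleep(3 * time.Second) // the log shows roughly 3s between probes
	}
	fmt.Println("kube-apiserver is up")
}

In this run the probe never succeeds, which is why the same block repeats for the remainder of the log.
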
	I1217 20:38:01.111545  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:38:01.123462  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:38:01.123520  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:38:01.152472  528764 cri.go:89] found id: ""
	I1217 20:38:01.152487  528764 logs.go:282] 0 containers: []
	W1217 20:38:01.152494  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:38:01.152499  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:38:01.152561  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:38:01.178899  528764 cri.go:89] found id: ""
	I1217 20:38:01.178913  528764 logs.go:282] 0 containers: []
	W1217 20:38:01.178921  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:38:01.178926  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:38:01.178983  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:38:01.206687  528764 cri.go:89] found id: ""
	I1217 20:38:01.206701  528764 logs.go:282] 0 containers: []
	W1217 20:38:01.206709  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:38:01.206714  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:38:01.206771  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:38:01.232497  528764 cri.go:89] found id: ""
	I1217 20:38:01.232511  528764 logs.go:282] 0 containers: []
	W1217 20:38:01.232519  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:38:01.232524  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:38:01.232579  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:38:01.261011  528764 cri.go:89] found id: ""
	I1217 20:38:01.261025  528764 logs.go:282] 0 containers: []
	W1217 20:38:01.261032  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:38:01.261037  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:38:01.261098  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:38:01.286117  528764 cri.go:89] found id: ""
	I1217 20:38:01.286132  528764 logs.go:282] 0 containers: []
	W1217 20:38:01.286150  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:38:01.286156  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:38:01.286222  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:38:01.312040  528764 cri.go:89] found id: ""
	I1217 20:38:01.312055  528764 logs.go:282] 0 containers: []
	W1217 20:38:01.312062  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:38:01.312069  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:38:01.312080  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:38:01.382670  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:38:01.382692  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:38:01.414378  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:38:01.414394  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:38:01.482999  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:38:01.483019  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:38:01.497972  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:38:01.497987  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:38:01.566351  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:38:01.557988   13036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:01.558761   13036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:01.560515   13036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:01.561020   13036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:01.562479   13036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
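
Every describe-nodes attempt fails identically because nothing is listening on localhost:8441 yet. A quick, kubectl-independent way to confirm that symptom is a plain TCP dial; this is an illustrative sketch, not part of the test harness:

// Sketch reproducing the symptom in the stderr above: the dial to
// localhost:8441 is refused because no apiserver is listening yet.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		// Matches the log's `dial tcp [::1]:8441: connect: connection refused`.
		fmt.Println("apiserver port closed:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port open")
}
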
	I1217 20:38:04.066612  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:38:04.079947  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:38:04.080010  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:38:04.114202  528764 cri.go:89] found id: ""
	I1217 20:38:04.114216  528764 logs.go:282] 0 containers: []
	W1217 20:38:04.114223  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:38:04.114228  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:38:04.114294  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:38:04.144225  528764 cri.go:89] found id: ""
	I1217 20:38:04.144238  528764 logs.go:282] 0 containers: []
	W1217 20:38:04.144246  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:38:04.144250  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:38:04.144306  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:38:04.174041  528764 cri.go:89] found id: ""
	I1217 20:38:04.174055  528764 logs.go:282] 0 containers: []
	W1217 20:38:04.174066  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:38:04.174072  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:38:04.174138  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:38:04.198282  528764 cri.go:89] found id: ""
	I1217 20:38:04.198296  528764 logs.go:282] 0 containers: []
	W1217 20:38:04.198304  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:38:04.198309  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:38:04.198381  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:38:04.223855  528764 cri.go:89] found id: ""
	I1217 20:38:04.223869  528764 logs.go:282] 0 containers: []
	W1217 20:38:04.223888  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:38:04.223897  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:38:04.223965  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:38:04.249576  528764 cri.go:89] found id: ""
	I1217 20:38:04.249592  528764 logs.go:282] 0 containers: []
	W1217 20:38:04.249599  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:38:04.249604  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:38:04.249667  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:38:04.278330  528764 cri.go:89] found id: ""
	I1217 20:38:04.278344  528764 logs.go:282] 0 containers: []
	W1217 20:38:04.278351  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:38:04.278359  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:38:04.278369  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:38:04.346075  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:38:04.346098  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:38:04.379272  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:38:04.379287  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:38:04.446775  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:38:04.446795  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:38:04.461788  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:38:04.461804  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:38:04.526831  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:38:04.519073   13145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:04.519534   13145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:04.520989   13145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:04.521420   13145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:04.522964   13145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 20:38:07.028018  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:38:07.038329  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:38:07.038394  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:38:07.070882  528764 cri.go:89] found id: ""
	I1217 20:38:07.070911  528764 logs.go:282] 0 containers: []
	W1217 20:38:07.070919  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:38:07.070925  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:38:07.070991  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:38:07.104836  528764 cri.go:89] found id: ""
	I1217 20:38:07.104850  528764 logs.go:282] 0 containers: []
	W1217 20:38:07.104857  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:38:07.104863  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:38:07.104932  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:38:07.141894  528764 cri.go:89] found id: ""
	I1217 20:38:07.141908  528764 logs.go:282] 0 containers: []
	W1217 20:38:07.141916  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:38:07.141921  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:38:07.141990  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:38:07.169039  528764 cri.go:89] found id: ""
	I1217 20:38:07.169053  528764 logs.go:282] 0 containers: []
	W1217 20:38:07.169061  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:38:07.169066  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:38:07.169123  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:38:07.194478  528764 cri.go:89] found id: ""
	I1217 20:38:07.194501  528764 logs.go:282] 0 containers: []
	W1217 20:38:07.194509  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:38:07.194514  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:38:07.194579  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:38:07.219609  528764 cri.go:89] found id: ""
	I1217 20:38:07.219624  528764 logs.go:282] 0 containers: []
	W1217 20:38:07.219632  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:38:07.219638  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:38:07.219705  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:38:07.243819  528764 cri.go:89] found id: ""
	I1217 20:38:07.243832  528764 logs.go:282] 0 containers: []
	W1217 20:38:07.243840  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:38:07.243847  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:38:07.243857  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:38:07.311464  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:38:07.311483  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:38:07.343698  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:38:07.343751  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:38:07.410312  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:38:07.410332  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:38:07.424918  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:38:07.424934  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:38:07.487872  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:38:07.479467   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:07.480073   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:07.481799   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:07.482374   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:07.484022   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 20:38:09.989569  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:38:10.015377  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:38:10.015448  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:38:10.044563  528764 cri.go:89] found id: ""
	I1217 20:38:10.044582  528764 logs.go:282] 0 containers: []
	W1217 20:38:10.044590  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:38:10.044596  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:38:10.044659  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:38:10.082544  528764 cri.go:89] found id: ""
	I1217 20:38:10.082572  528764 logs.go:282] 0 containers: []
	W1217 20:38:10.082579  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:38:10.082585  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:38:10.082655  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:38:10.111998  528764 cri.go:89] found id: ""
	I1217 20:38:10.112021  528764 logs.go:282] 0 containers: []
	W1217 20:38:10.112028  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:38:10.112034  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:38:10.112090  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:38:10.143847  528764 cri.go:89] found id: ""
	I1217 20:38:10.143875  528764 logs.go:282] 0 containers: []
	W1217 20:38:10.143883  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:38:10.143888  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:38:10.143959  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:38:10.169935  528764 cri.go:89] found id: ""
	I1217 20:38:10.169948  528764 logs.go:282] 0 containers: []
	W1217 20:38:10.169956  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:38:10.169961  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:38:10.170035  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:38:10.199354  528764 cri.go:89] found id: ""
	I1217 20:38:10.199367  528764 logs.go:282] 0 containers: []
	W1217 20:38:10.199389  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:38:10.199395  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:38:10.199469  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:38:10.224921  528764 cri.go:89] found id: ""
	I1217 20:38:10.224934  528764 logs.go:282] 0 containers: []
	W1217 20:38:10.224942  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:38:10.224950  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:38:10.224961  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:38:10.292927  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:38:10.292947  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:38:10.321993  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:38:10.322010  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:38:10.388855  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:38:10.388876  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:38:10.404211  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:38:10.404228  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:38:10.466886  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:38:10.458803   13359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:10.459494   13359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:10.461201   13359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:10.461646   13359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:10.463180   13359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 20:38:12.968194  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:38:12.978084  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:38:12.978143  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:38:13.006691  528764 cri.go:89] found id: ""
	I1217 20:38:13.006706  528764 logs.go:282] 0 containers: []
	W1217 20:38:13.006713  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:38:13.006719  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:38:13.006779  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:38:13.032773  528764 cri.go:89] found id: ""
	I1217 20:38:13.032787  528764 logs.go:282] 0 containers: []
	W1217 20:38:13.032795  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:38:13.032800  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:38:13.032854  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:38:13.059128  528764 cri.go:89] found id: ""
	I1217 20:38:13.059142  528764 logs.go:282] 0 containers: []
	W1217 20:38:13.059150  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:38:13.059155  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:38:13.059213  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:38:13.093983  528764 cri.go:89] found id: ""
	I1217 20:38:13.093997  528764 logs.go:282] 0 containers: []
	W1217 20:38:13.094005  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:38:13.094010  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:38:13.094066  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:38:13.136453  528764 cri.go:89] found id: ""
	I1217 20:38:13.136467  528764 logs.go:282] 0 containers: []
	W1217 20:38:13.136474  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:38:13.136481  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:38:13.136536  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:38:13.166382  528764 cri.go:89] found id: ""
	I1217 20:38:13.166396  528764 logs.go:282] 0 containers: []
	W1217 20:38:13.166403  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:38:13.166409  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:38:13.166476  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:38:13.194638  528764 cri.go:89] found id: ""
	I1217 20:38:13.194651  528764 logs.go:282] 0 containers: []
	W1217 20:38:13.194658  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:38:13.194666  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:38:13.194689  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:38:13.261344  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:38:13.261362  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:38:13.276057  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:38:13.276073  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:38:13.341759  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:38:13.333469   13450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:13.334136   13450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:13.335845   13450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:13.336372   13450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:13.337969   13450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 20:38:13.341769  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:38:13.341780  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:38:13.412593  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:38:13.412613  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
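
The per-component listing repeated in each cycle reduces to running crictl ps -a --quiet --name=<component> for every control-plane piece and checking whether any container ID comes back. A standalone sketch of that check (component list taken from the log itself):

// Sketch of the per-component container listing repeated in each cycle:
// `crictl ps -a --quiet --name=<component>` prints matching container IDs,
// one per line; an empty result is what the W-lines above report.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
	}
	for _, name := range components {
		out, _ := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if ids := strings.Fields(string(out)); len(ids) == 0 {
			fmt.Printf("no container found matching %q\n", name)
		} else {
			fmt.Printf("%s: %s\n", name, strings.Join(ids, " "))
		}
	}
}
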
	I1217 20:38:15.945731  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:38:15.956026  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:38:15.956085  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:38:15.980875  528764 cri.go:89] found id: ""
	I1217 20:38:15.980889  528764 logs.go:282] 0 containers: []
	W1217 20:38:15.980897  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:38:15.980902  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:38:15.980956  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:38:16.017238  528764 cri.go:89] found id: ""
	I1217 20:38:16.017253  528764 logs.go:282] 0 containers: []
	W1217 20:38:16.017260  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:38:16.017265  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:38:16.017327  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:38:16.042662  528764 cri.go:89] found id: ""
	I1217 20:38:16.042676  528764 logs.go:282] 0 containers: []
	W1217 20:38:16.042684  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:38:16.042700  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:38:16.042759  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:38:16.070239  528764 cri.go:89] found id: ""
	I1217 20:38:16.070253  528764 logs.go:282] 0 containers: []
	W1217 20:38:16.070265  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:38:16.070281  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:38:16.070344  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:38:16.101763  528764 cri.go:89] found id: ""
	I1217 20:38:16.101777  528764 logs.go:282] 0 containers: []
	W1217 20:38:16.101785  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:38:16.101802  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:38:16.101863  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:38:16.132808  528764 cri.go:89] found id: ""
	I1217 20:38:16.132822  528764 logs.go:282] 0 containers: []
	W1217 20:38:16.132830  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:38:16.132835  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:38:16.132904  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:38:16.162901  528764 cri.go:89] found id: ""
	I1217 20:38:16.162925  528764 logs.go:282] 0 containers: []
	W1217 20:38:16.162932  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:38:16.162940  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:38:16.162951  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:38:16.177475  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:38:16.177491  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:38:16.239620  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:38:16.231437   13552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:16.232280   13552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:16.233889   13552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:16.234228   13552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:16.235746   13552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 20:38:16.239630  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:38:16.239641  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:38:16.306695  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:38:16.306714  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:38:16.338739  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:38:16.338754  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:38:18.906627  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:38:18.916877  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:38:18.916940  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:38:18.940995  528764 cri.go:89] found id: ""
	I1217 20:38:18.941009  528764 logs.go:282] 0 containers: []
	W1217 20:38:18.941016  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:38:18.941022  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:38:18.941090  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:38:18.967366  528764 cri.go:89] found id: ""
	I1217 20:38:18.967381  528764 logs.go:282] 0 containers: []
	W1217 20:38:18.967388  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:38:18.967393  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:38:18.967448  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:38:18.993265  528764 cri.go:89] found id: ""
	I1217 20:38:18.993279  528764 logs.go:282] 0 containers: []
	W1217 20:38:18.993286  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:38:18.993291  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:38:18.993345  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:38:19.020582  528764 cri.go:89] found id: ""
	I1217 20:38:19.020595  528764 logs.go:282] 0 containers: []
	W1217 20:38:19.020603  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:38:19.020608  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:38:19.020666  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:38:19.045982  528764 cri.go:89] found id: ""
	I1217 20:38:19.045996  528764 logs.go:282] 0 containers: []
	W1217 20:38:19.046005  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:38:19.046010  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:38:19.046069  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:38:19.073910  528764 cri.go:89] found id: ""
	I1217 20:38:19.073923  528764 logs.go:282] 0 containers: []
	W1217 20:38:19.073930  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:38:19.073936  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:38:19.073992  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:38:19.113478  528764 cri.go:89] found id: ""
	I1217 20:38:19.113491  528764 logs.go:282] 0 containers: []
	W1217 20:38:19.113499  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:38:19.113507  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:38:19.113517  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:38:19.181345  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:38:19.181364  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:38:19.196831  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:38:19.196848  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:38:19.262885  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:38:19.253623   13659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:19.254429   13659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:19.256066   13659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:19.256658   13659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:19.258445   13659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 20:38:19.262896  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:38:19.262907  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:38:19.332927  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:38:19.332947  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:38:21.863218  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:38:21.873488  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:38:21.873552  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:38:21.901892  528764 cri.go:89] found id: ""
	I1217 20:38:21.901907  528764 logs.go:282] 0 containers: []
	W1217 20:38:21.901915  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:38:21.901930  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:38:21.901988  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:38:21.928067  528764 cri.go:89] found id: ""
	I1217 20:38:21.928080  528764 logs.go:282] 0 containers: []
	W1217 20:38:21.928087  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:38:21.928092  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:38:21.928149  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:38:21.953356  528764 cri.go:89] found id: ""
	I1217 20:38:21.953371  528764 logs.go:282] 0 containers: []
	W1217 20:38:21.953378  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:38:21.953383  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:38:21.953444  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:38:21.987415  528764 cri.go:89] found id: ""
	I1217 20:38:21.987428  528764 logs.go:282] 0 containers: []
	W1217 20:38:21.987436  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:38:21.987442  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:38:21.987509  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:38:22.016922  528764 cri.go:89] found id: ""
	I1217 20:38:22.016937  528764 logs.go:282] 0 containers: []
	W1217 20:38:22.016945  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:38:22.016951  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:38:22.017009  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:38:22.044463  528764 cri.go:89] found id: ""
	I1217 20:38:22.044477  528764 logs.go:282] 0 containers: []
	W1217 20:38:22.044484  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:38:22.044490  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:38:22.044545  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:38:22.072815  528764 cri.go:89] found id: ""
	I1217 20:38:22.072828  528764 logs.go:282] 0 containers: []
	W1217 20:38:22.072836  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:38:22.072844  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:38:22.072854  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:38:22.106754  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:38:22.106778  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:38:22.177000  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:38:22.177019  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:38:22.191928  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:38:22.191945  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:38:22.254841  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:38:22.246562   13773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:22.247341   13773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:22.249143   13773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:22.249615   13773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:22.251134   13773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 20:38:22.254851  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:38:22.254862  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
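Each retry cycle above has the same shape: probe for a running kube-apiserver process, then ask the CRI for containers matching each control-plane component, and gather logs when nothing is found. A minimal bash sketch of that probe, with the commands copied from the log and the component list inferred from the names queried there:

    # Probe each expected component via the CRI, as the cycles above do.
    # `crictl ps -a --quiet --name=NAME` prints matching container IDs, or nothing.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet; do
      if [ -z "$(sudo crictl ps -a --quiet --name="$name")" ]; then
        echo "no container matching \"$name\""
      fi
    done

In this run every probe comes back empty, which is why each cycle falls through to the log-gathering steps.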
	I1217 20:38:24.826532  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:38:24.836772  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:38:24.836836  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:38:24.862693  528764 cri.go:89] found id: ""
	I1217 20:38:24.862706  528764 logs.go:282] 0 containers: []
	W1217 20:38:24.862714  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:38:24.862719  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:38:24.862789  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:38:24.887641  528764 cri.go:89] found id: ""
	I1217 20:38:24.887656  528764 logs.go:282] 0 containers: []
	W1217 20:38:24.887663  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:38:24.887668  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:38:24.887737  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:38:24.913131  528764 cri.go:89] found id: ""
	I1217 20:38:24.913145  528764 logs.go:282] 0 containers: []
	W1217 20:38:24.913168  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:38:24.913174  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:38:24.913242  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:38:24.939734  528764 cri.go:89] found id: ""
	I1217 20:38:24.939748  528764 logs.go:282] 0 containers: []
	W1217 20:38:24.939755  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:38:24.939760  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:38:24.939815  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:38:24.964904  528764 cri.go:89] found id: ""
	I1217 20:38:24.964919  528764 logs.go:282] 0 containers: []
	W1217 20:38:24.964925  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:38:24.964930  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:38:24.964988  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:38:24.990333  528764 cri.go:89] found id: ""
	I1217 20:38:24.990348  528764 logs.go:282] 0 containers: []
	W1217 20:38:24.990355  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:38:24.990361  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:38:24.990421  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:38:25.019872  528764 cri.go:89] found id: ""
	I1217 20:38:25.019887  528764 logs.go:282] 0 containers: []
	W1217 20:38:25.019895  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:38:25.019902  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:38:25.019914  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:38:25.036413  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:38:25.036438  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:38:25.112619  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:38:25.103911   13861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:25.104770   13861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:25.106472   13861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:25.107045   13861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:25.108652   13861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:38:25.103911   13861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:25.104770   13861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:25.106472   13861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:25.107045   13861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:25.108652   13861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:38:25.112632  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:38:25.112642  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:38:25.184378  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:38:25.184399  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:38:25.216673  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:38:25.216689  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
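Every `kubectl describe nodes` attempt fails identically: nothing is listening on localhost:8441, the apiserver port this profile uses according to the URLs in the stderr above. Two hedged ways to confirm that from the node, reusing only the command and paths that appear in the log plus standard `ss`:

    # Show whether anything is listening on the apiserver port from the log (8441).
    # ss flags: -l listening sockets, -t TCP, -n numeric ports.
    sudo ss -ltn 'sport = :8441'
    # Reproduce the exact failing call captured in the log:
    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig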
	I1217 20:38:27.785567  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:38:27.796326  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:38:27.796391  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:38:27.825782  528764 cri.go:89] found id: ""
	I1217 20:38:27.825796  528764 logs.go:282] 0 containers: []
	W1217 20:38:27.825804  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:38:27.825809  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:38:27.825864  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:38:27.850601  528764 cri.go:89] found id: ""
	I1217 20:38:27.850614  528764 logs.go:282] 0 containers: []
	W1217 20:38:27.850627  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:38:27.850632  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:38:27.850700  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:38:27.876056  528764 cri.go:89] found id: ""
	I1217 20:38:27.876070  528764 logs.go:282] 0 containers: []
	W1217 20:38:27.876082  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:38:27.876087  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:38:27.876151  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:38:27.901899  528764 cri.go:89] found id: ""
	I1217 20:38:27.901913  528764 logs.go:282] 0 containers: []
	W1217 20:38:27.901920  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:38:27.901926  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:38:27.901997  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:38:27.931527  528764 cri.go:89] found id: ""
	I1217 20:38:27.931541  528764 logs.go:282] 0 containers: []
	W1217 20:38:27.931548  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:38:27.931553  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:38:27.931627  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:38:27.956390  528764 cri.go:89] found id: ""
	I1217 20:38:27.956404  528764 logs.go:282] 0 containers: []
	W1217 20:38:27.956411  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:38:27.956417  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:38:27.956473  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:38:27.985929  528764 cri.go:89] found id: ""
	I1217 20:38:27.985943  528764 logs.go:282] 0 containers: []
	W1217 20:38:27.985951  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:38:27.985959  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:38:27.985970  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:38:28.054474  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:38:28.054492  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:38:28.070115  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:38:28.070132  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:38:28.151327  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:38:28.142186   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:28.142985   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:28.145194   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:28.145756   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:28.147299   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:38:28.142186   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:28.142985   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:28.145194   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:28.145756   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:28.147299   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:38:28.151337  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:38:28.151347  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:38:28.220518  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:38:28.220542  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
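The "container status" gatherer is itself a fallback chain: if `which crictl` fails, the command substitution degrades to the bare word `crictl`, and if that invocation then fails, the `|| sudo docker ps -a` branch runs instead. The same command rewritten as a function, as a behavior-equivalent sketch for readability:

    runtime_ps() {
      # `which crictl || echo crictl` yields crictl's path when it is installed,
      # otherwise the bare name; a failed invocation falls back to docker.
      sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a
    }

This keeps the gatherer useful on nodes where only Docker, not a CRI CLI, is present.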
	I1217 20:38:30.755166  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:38:30.765287  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:38:30.765345  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:38:30.790103  528764 cri.go:89] found id: ""
	I1217 20:38:30.790117  528764 logs.go:282] 0 containers: []
	W1217 20:38:30.790139  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:38:30.790145  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:38:30.790209  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:38:30.815526  528764 cri.go:89] found id: ""
	I1217 20:38:30.815539  528764 logs.go:282] 0 containers: []
	W1217 20:38:30.815547  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:38:30.815552  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:38:30.815647  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:38:30.841851  528764 cri.go:89] found id: ""
	I1217 20:38:30.841864  528764 logs.go:282] 0 containers: []
	W1217 20:38:30.841884  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:38:30.841890  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:38:30.841963  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:38:30.866784  528764 cri.go:89] found id: ""
	I1217 20:38:30.866798  528764 logs.go:282] 0 containers: []
	W1217 20:38:30.866829  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:38:30.866834  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:38:30.866922  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:38:30.892935  528764 cri.go:89] found id: ""
	I1217 20:38:30.892948  528764 logs.go:282] 0 containers: []
	W1217 20:38:30.892956  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:38:30.892961  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:38:30.893017  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:38:30.918525  528764 cri.go:89] found id: ""
	I1217 20:38:30.918545  528764 logs.go:282] 0 containers: []
	W1217 20:38:30.918552  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:38:30.918558  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:38:30.918624  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:38:30.946571  528764 cri.go:89] found id: ""
	I1217 20:38:30.946586  528764 logs.go:282] 0 containers: []
	W1217 20:38:30.946593  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:38:30.946600  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:38:30.946620  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:38:31.016310  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:38:31.016330  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:38:31.031710  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:38:31.031729  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:38:31.121622  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:38:31.112851   14070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:31.113732   14070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:31.115400   14070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:31.115997   14070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:31.117664   14070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:38:31.112851   14070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:31.113732   14070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:31.115400   14070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:31.115997   14070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:31.117664   14070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:38:31.121632  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:38:31.121643  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:38:31.191069  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:38:31.191089  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
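The remaining per-cycle log sources are the kubelet and CRI-O journal units plus the kernel ring buffer. Annotated copies of the exact commands from the log, with the dmesg flags spelled out:

    sudo journalctl -u kubelet -n 400   # last 400 lines of the kubelet unit
    sudo journalctl -u crio -n 400      # last 400 lines of the CRI-O unit
    # -H human-readable output, -P disables the pager -H would otherwise use,
    # -L=never turns off color, --level restricts to the listed severities.
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400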
	I1217 20:38:33.724221  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:38:33.734488  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:38:33.734549  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:38:33.761235  528764 cri.go:89] found id: ""
	I1217 20:38:33.761249  528764 logs.go:282] 0 containers: []
	W1217 20:38:33.761256  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:38:33.761262  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:38:33.761322  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:38:33.787337  528764 cri.go:89] found id: ""
	I1217 20:38:33.787350  528764 logs.go:282] 0 containers: []
	W1217 20:38:33.787358  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:38:33.787363  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:38:33.787432  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:38:33.812684  528764 cri.go:89] found id: ""
	I1217 20:38:33.812706  528764 logs.go:282] 0 containers: []
	W1217 20:38:33.812714  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:38:33.812719  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:38:33.812784  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:38:33.842819  528764 cri.go:89] found id: ""
	I1217 20:38:33.842832  528764 logs.go:282] 0 containers: []
	W1217 20:38:33.842854  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:38:33.842865  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:38:33.842929  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:38:33.868875  528764 cri.go:89] found id: ""
	I1217 20:38:33.868889  528764 logs.go:282] 0 containers: []
	W1217 20:38:33.868897  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:38:33.868902  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:38:33.868961  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:38:33.898309  528764 cri.go:89] found id: ""
	I1217 20:38:33.898323  528764 logs.go:282] 0 containers: []
	W1217 20:38:33.898331  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:38:33.898356  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:38:33.898425  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:38:33.924913  528764 cri.go:89] found id: ""
	I1217 20:38:33.924927  528764 logs.go:282] 0 containers: []
	W1217 20:38:33.924935  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:38:33.924943  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:38:33.924957  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:38:33.990911  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:38:33.990930  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:38:34.008276  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:38:34.008297  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:38:34.087503  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:38:34.076899   14176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:34.077640   14176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:34.079660   14176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:34.080396   14176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:34.083391   14176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:38:34.076899   14176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:34.077640   14176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:34.079660   14176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:34.080396   14176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:34.083391   14176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:38:34.087514  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:38:34.087537  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:38:34.163882  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:38:34.163901  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
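The opening check of each cycle, `sudo pgrep -xnf kube-apiserver.*minikube.*`, matches against the full command line (-f), requires the whole line to match the pattern (-x), and reports only the newest matching PID (-n); its exit status 1 (no match) is what keeps the loop retrying. A quoted form is safer interactively, since the unquoted `.*` in the log could otherwise be expanded by the shell:

    # Exit 0 iff a kube-apiserver process for this minikube profile exists.
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'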
	I1217 20:38:36.694644  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:38:36.704742  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:38:36.704803  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:38:36.730340  528764 cri.go:89] found id: ""
	I1217 20:38:36.730354  528764 logs.go:282] 0 containers: []
	W1217 20:38:36.730363  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:38:36.730369  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:38:36.730426  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:38:36.757473  528764 cri.go:89] found id: ""
	I1217 20:38:36.757486  528764 logs.go:282] 0 containers: []
	W1217 20:38:36.757493  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:38:36.757499  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:38:36.757554  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:38:36.786113  528764 cri.go:89] found id: ""
	I1217 20:38:36.786127  528764 logs.go:282] 0 containers: []
	W1217 20:38:36.786135  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:38:36.786140  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:38:36.786246  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:38:36.812385  528764 cri.go:89] found id: ""
	I1217 20:38:36.812399  528764 logs.go:282] 0 containers: []
	W1217 20:38:36.812407  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:38:36.812412  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:38:36.812471  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:38:36.837075  528764 cri.go:89] found id: ""
	I1217 20:38:36.837088  528764 logs.go:282] 0 containers: []
	W1217 20:38:36.837095  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:38:36.837100  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:38:36.837156  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:38:36.866713  528764 cri.go:89] found id: ""
	I1217 20:38:36.866727  528764 logs.go:282] 0 containers: []
	W1217 20:38:36.866734  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:38:36.866740  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:38:36.866808  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:38:36.896063  528764 cri.go:89] found id: ""
	I1217 20:38:36.896078  528764 logs.go:282] 0 containers: []
	W1217 20:38:36.896085  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:38:36.896093  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:38:36.896106  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:38:36.961772  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:38:36.961793  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:38:36.976619  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:38:36.976637  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:38:37.049152  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:38:37.040423   14284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:37.041223   14284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:37.042928   14284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:37.043675   14284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:37.045309   14284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:38:37.040423   14284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:37.041223   14284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:37.042928   14284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:37.043675   14284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:37.045309   14284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:38:37.049163  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:38:37.049174  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:38:37.119769  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:38:37.119788  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:38:39.651068  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:38:39.661185  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:38:39.661251  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:38:39.686602  528764 cri.go:89] found id: ""
	I1217 20:38:39.686616  528764 logs.go:282] 0 containers: []
	W1217 20:38:39.686623  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:38:39.686628  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:38:39.686685  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:38:39.711563  528764 cri.go:89] found id: ""
	I1217 20:38:39.711577  528764 logs.go:282] 0 containers: []
	W1217 20:38:39.711602  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:38:39.711608  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:38:39.711674  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:38:39.738013  528764 cri.go:89] found id: ""
	I1217 20:38:39.738027  528764 logs.go:282] 0 containers: []
	W1217 20:38:39.738034  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:38:39.738039  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:38:39.738094  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:38:39.763309  528764 cri.go:89] found id: ""
	I1217 20:38:39.763323  528764 logs.go:282] 0 containers: []
	W1217 20:38:39.763330  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:38:39.763336  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:38:39.763396  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:38:39.788615  528764 cri.go:89] found id: ""
	I1217 20:38:39.788628  528764 logs.go:282] 0 containers: []
	W1217 20:38:39.788640  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:38:39.788645  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:38:39.788701  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:38:39.813921  528764 cri.go:89] found id: ""
	I1217 20:38:39.813935  528764 logs.go:282] 0 containers: []
	W1217 20:38:39.813942  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:38:39.813948  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:38:39.814006  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:38:39.843230  528764 cri.go:89] found id: ""
	I1217 20:38:39.843244  528764 logs.go:282] 0 containers: []
	W1217 20:38:39.843252  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:38:39.843260  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:38:39.843271  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:38:39.857938  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:38:39.857954  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:38:39.921708  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:38:39.913994   14388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:39.914417   14388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:39.915990   14388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:39.916326   14388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:39.917797   14388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:38:39.913994   14388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:39.914417   14388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:39.915990   14388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:39.916326   14388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:39.917797   14388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:38:39.921717  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:38:39.921730  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:38:39.992421  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:38:39.992444  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:38:40.032432  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:38:40.032451  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
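Reading the timestamps, the whole gather cycle repeats roughly every three seconds until the apiserver answers. A hedged sketch of an equivalent wait loop; the 3-second sleep is an assumption read off the timestamps, not a minikube constant:

    # Retry until an apiserver process for this profile appears.
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      sleep 3   # assumed cadence; the log shows ~3s between cycles
    done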
	I1217 20:38:42.605010  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:38:42.614872  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:38:42.614934  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:38:42.639899  528764 cri.go:89] found id: ""
	I1217 20:38:42.639913  528764 logs.go:282] 0 containers: []
	W1217 20:38:42.639920  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:38:42.639926  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:38:42.639996  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:38:42.670021  528764 cri.go:89] found id: ""
	I1217 20:38:42.670036  528764 logs.go:282] 0 containers: []
	W1217 20:38:42.670049  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:38:42.670055  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:38:42.670116  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:38:42.696223  528764 cri.go:89] found id: ""
	I1217 20:38:42.696237  528764 logs.go:282] 0 containers: []
	W1217 20:38:42.696244  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:38:42.696251  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:38:42.696310  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:38:42.722579  528764 cri.go:89] found id: ""
	I1217 20:38:42.722593  528764 logs.go:282] 0 containers: []
	W1217 20:38:42.722606  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:38:42.722612  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:38:42.722668  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:38:42.747677  528764 cri.go:89] found id: ""
	I1217 20:38:42.747690  528764 logs.go:282] 0 containers: []
	W1217 20:38:42.747698  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:38:42.747703  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:38:42.747764  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:38:42.774015  528764 cri.go:89] found id: ""
	I1217 20:38:42.774029  528764 logs.go:282] 0 containers: []
	W1217 20:38:42.774036  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:38:42.774053  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:38:42.774112  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:38:42.799502  528764 cri.go:89] found id: ""
	I1217 20:38:42.799516  528764 logs.go:282] 0 containers: []
	W1217 20:38:42.799525  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:38:42.799533  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:38:42.799543  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:38:42.865035  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:38:42.865058  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:38:42.880616  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:38:42.880633  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:38:42.949493  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:38:42.939951   14495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:42.940704   14495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:42.942455   14495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:42.943033   14495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:42.944768   14495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:38:42.939951   14495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:42.940704   14495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:42.942455   14495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:42.943033   14495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:42.944768   14495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:38:42.949505  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:38:42.949528  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:38:43.019292  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:38:43.019312  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:38:45.548705  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:38:45.558968  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:38:45.559027  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:38:45.583967  528764 cri.go:89] found id: ""
	I1217 20:38:45.583982  528764 logs.go:282] 0 containers: []
	W1217 20:38:45.583989  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:38:45.583994  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:38:45.584050  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:38:45.609420  528764 cri.go:89] found id: ""
	I1217 20:38:45.609434  528764 logs.go:282] 0 containers: []
	W1217 20:38:45.609441  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:38:45.609447  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:38:45.609508  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:38:45.640522  528764 cri.go:89] found id: ""
	I1217 20:38:45.640546  528764 logs.go:282] 0 containers: []
	W1217 20:38:45.640554  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:38:45.640559  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:38:45.640625  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:38:45.666349  528764 cri.go:89] found id: ""
	I1217 20:38:45.666362  528764 logs.go:282] 0 containers: []
	W1217 20:38:45.666369  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:38:45.666375  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:38:45.666432  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:38:45.696168  528764 cri.go:89] found id: ""
	I1217 20:38:45.696182  528764 logs.go:282] 0 containers: []
	W1217 20:38:45.696189  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:38:45.696194  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:38:45.696255  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:38:45.719763  528764 cri.go:89] found id: ""
	I1217 20:38:45.719777  528764 logs.go:282] 0 containers: []
	W1217 20:38:45.719784  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:38:45.719790  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:38:45.719847  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:38:45.744391  528764 cri.go:89] found id: ""
	I1217 20:38:45.744405  528764 logs.go:282] 0 containers: []
	W1217 20:38:45.744412  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:38:45.744421  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:38:45.744451  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:38:45.809635  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:38:45.809656  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:38:45.824260  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:38:45.824275  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:38:45.887725  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:38:45.879670   14601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:45.880327   14601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:45.881887   14601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:45.882340   14601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:45.883862   14601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:38:45.879670   14601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:45.880327   14601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:45.881887   14601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:45.882340   14601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:45.883862   14601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:38:45.887735  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:38:45.887746  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:38:45.955422  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:38:45.955441  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
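Since kubelet logs are being collected every cycle yet no control-plane containers ever appear, the next manual checks worth running are whether kubelet is active and whether the static pod manifests exist. Both paths below are kubeadm/minikube defaults and are assumptions, not taken from this log:

    systemctl is-active kubelet          # should print "active"
    ls /etc/kubernetes/manifests/        # expected: kube-apiserver.yaml, etcd.yaml, ...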
	I1217 20:38:48.485624  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:38:48.495313  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:38:48.495374  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:38:48.520059  528764 cri.go:89] found id: ""
	I1217 20:38:48.520074  528764 logs.go:282] 0 containers: []
	W1217 20:38:48.520081  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:38:48.520087  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:38:48.520143  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:38:48.545655  528764 cri.go:89] found id: ""
	I1217 20:38:48.545670  528764 logs.go:282] 0 containers: []
	W1217 20:38:48.545677  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:38:48.545682  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:38:48.545740  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:38:48.570521  528764 cri.go:89] found id: ""
	I1217 20:38:48.570535  528764 logs.go:282] 0 containers: []
	W1217 20:38:48.570543  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:38:48.570548  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:38:48.570606  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:38:48.596861  528764 cri.go:89] found id: ""
	I1217 20:38:48.596875  528764 logs.go:282] 0 containers: []
	W1217 20:38:48.596883  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:38:48.596888  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:38:48.596946  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:38:48.623093  528764 cri.go:89] found id: ""
	I1217 20:38:48.623115  528764 logs.go:282] 0 containers: []
	W1217 20:38:48.623123  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:38:48.623128  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:38:48.623203  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:38:48.648854  528764 cri.go:89] found id: ""
	I1217 20:38:48.648868  528764 logs.go:282] 0 containers: []
	W1217 20:38:48.648876  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:38:48.648881  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:38:48.648953  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:38:48.673887  528764 cri.go:89] found id: ""
	I1217 20:38:48.673911  528764 logs.go:282] 0 containers: []
	W1217 20:38:48.673919  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:38:48.673928  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:38:48.673939  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:38:48.739985  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:38:48.740004  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:38:48.754655  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:38:48.754672  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:38:48.818714  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:38:48.810661   14706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:48.811171   14706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:48.812860   14706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:48.813319   14706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:48.814815   14706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 20:38:48.818724  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:38:48.818734  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:38:48.889255  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:38:48.889281  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:38:51.421767  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:38:51.432066  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:38:51.432137  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:38:51.461100  528764 cri.go:89] found id: ""
	I1217 20:38:51.461115  528764 logs.go:282] 0 containers: []
	W1217 20:38:51.461123  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:38:51.461132  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:38:51.461205  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:38:51.493482  528764 cri.go:89] found id: ""
	I1217 20:38:51.493495  528764 logs.go:282] 0 containers: []
	W1217 20:38:51.493503  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:38:51.493508  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:38:51.493573  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:38:51.523360  528764 cri.go:89] found id: ""
	I1217 20:38:51.523374  528764 logs.go:282] 0 containers: []
	W1217 20:38:51.523382  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:38:51.523387  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:38:51.523443  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:38:51.549129  528764 cri.go:89] found id: ""
	I1217 20:38:51.549143  528764 logs.go:282] 0 containers: []
	W1217 20:38:51.549151  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:38:51.549156  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:38:51.549212  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:38:51.575573  528764 cri.go:89] found id: ""
	I1217 20:38:51.575613  528764 logs.go:282] 0 containers: []
	W1217 20:38:51.575621  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:38:51.575631  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:38:51.575698  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:38:51.601059  528764 cri.go:89] found id: ""
	I1217 20:38:51.601074  528764 logs.go:282] 0 containers: []
	W1217 20:38:51.601081  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:38:51.601087  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:38:51.601153  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:38:51.626446  528764 cri.go:89] found id: ""
	I1217 20:38:51.626461  528764 logs.go:282] 0 containers: []
	W1217 20:38:51.626468  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:38:51.626476  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:38:51.626487  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:38:51.693973  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:38:51.693993  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:38:51.724023  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:38:51.724039  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:38:51.788885  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:38:51.788906  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:38:51.803552  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:38:51.803568  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:38:51.866022  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:38:51.858220   14821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:51.858930   14821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:51.860542   14821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:51.860857   14821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:51.862309   14821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 20:38:54.367685  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:38:54.378312  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:38:54.378367  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:38:54.407726  528764 cri.go:89] found id: ""
	I1217 20:38:54.407744  528764 logs.go:282] 0 containers: []
	W1217 20:38:54.407752  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:38:54.407758  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:38:54.407815  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:38:54.432535  528764 cri.go:89] found id: ""
	I1217 20:38:54.432550  528764 logs.go:282] 0 containers: []
	W1217 20:38:54.432557  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:38:54.432562  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:38:54.432623  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:38:54.458438  528764 cri.go:89] found id: ""
	I1217 20:38:54.458453  528764 logs.go:282] 0 containers: []
	W1217 20:38:54.458460  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:38:54.458465  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:38:54.458527  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:38:54.487170  528764 cri.go:89] found id: ""
	I1217 20:38:54.487184  528764 logs.go:282] 0 containers: []
	W1217 20:38:54.487191  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:38:54.487198  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:38:54.487254  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:38:54.512876  528764 cri.go:89] found id: ""
	I1217 20:38:54.512890  528764 logs.go:282] 0 containers: []
	W1217 20:38:54.512897  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:38:54.512902  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:38:54.512959  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:38:54.537031  528764 cri.go:89] found id: ""
	I1217 20:38:54.537044  528764 logs.go:282] 0 containers: []
	W1217 20:38:54.537051  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:38:54.537056  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:38:54.537112  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:38:54.562349  528764 cri.go:89] found id: ""
	I1217 20:38:54.562363  528764 logs.go:282] 0 containers: []
	W1217 20:38:54.562387  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:38:54.562396  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:38:54.562406  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:38:54.628118  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:38:54.628137  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:38:54.642915  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:38:54.642932  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:38:54.707130  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:38:54.699152   14913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:54.699635   14913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:54.701269   14913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:54.701677   14913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:54.703119   14913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 20:38:54.707141  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:38:54.707152  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:38:54.775317  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:38:54.775338  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:38:57.310952  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:38:57.322922  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:38:57.322983  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:38:57.357392  528764 cri.go:89] found id: ""
	I1217 20:38:57.357406  528764 logs.go:282] 0 containers: []
	W1217 20:38:57.357413  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:38:57.357420  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:38:57.357476  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:38:57.384349  528764 cri.go:89] found id: ""
	I1217 20:38:57.384363  528764 logs.go:282] 0 containers: []
	W1217 20:38:57.384373  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:38:57.384378  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:38:57.384434  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:38:57.412576  528764 cri.go:89] found id: ""
	I1217 20:38:57.412590  528764 logs.go:282] 0 containers: []
	W1217 20:38:57.412598  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:38:57.412603  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:38:57.412662  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:38:57.439190  528764 cri.go:89] found id: ""
	I1217 20:38:57.439205  528764 logs.go:282] 0 containers: []
	W1217 20:38:57.439212  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:38:57.439217  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:38:57.439305  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:38:57.466239  528764 cri.go:89] found id: ""
	I1217 20:38:57.466253  528764 logs.go:282] 0 containers: []
	W1217 20:38:57.466262  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:38:57.466267  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:38:57.466324  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:38:57.491495  528764 cri.go:89] found id: ""
	I1217 20:38:57.491508  528764 logs.go:282] 0 containers: []
	W1217 20:38:57.491516  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:38:57.491522  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:38:57.491597  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:38:57.517009  528764 cri.go:89] found id: ""
	I1217 20:38:57.517023  528764 logs.go:282] 0 containers: []
	W1217 20:38:57.517030  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:38:57.517038  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:38:57.517048  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:38:57.582648  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:38:57.582669  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:38:57.597231  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:38:57.597249  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:38:57.663163  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:38:57.654987   15018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:57.655397   15018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:57.656981   15018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:57.657561   15018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:57.659204   15018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 20:38:57.663174  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:38:57.663186  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:38:57.735126  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:38:57.735151  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:39:00.265877  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:39:00.292750  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:39:00.292841  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:39:00.342493  528764 cri.go:89] found id: ""
	I1217 20:39:00.342529  528764 logs.go:282] 0 containers: []
	W1217 20:39:00.342553  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:39:00.342560  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:39:00.342673  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:39:00.389833  528764 cri.go:89] found id: ""
	I1217 20:39:00.389858  528764 logs.go:282] 0 containers: []
	W1217 20:39:00.389866  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:39:00.389871  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:39:00.389943  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:39:00.427417  528764 cri.go:89] found id: ""
	I1217 20:39:00.427442  528764 logs.go:282] 0 containers: []
	W1217 20:39:00.427450  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:39:00.427455  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:39:00.427525  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:39:00.455698  528764 cri.go:89] found id: ""
	I1217 20:39:00.455712  528764 logs.go:282] 0 containers: []
	W1217 20:39:00.455720  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:39:00.455726  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:39:00.455784  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:39:00.487535  528764 cri.go:89] found id: ""
	I1217 20:39:00.487551  528764 logs.go:282] 0 containers: []
	W1217 20:39:00.487558  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:39:00.487576  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:39:00.487666  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:39:00.514228  528764 cri.go:89] found id: ""
	I1217 20:39:00.514243  528764 logs.go:282] 0 containers: []
	W1217 20:39:00.514251  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:39:00.514256  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:39:00.514315  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:39:00.540536  528764 cri.go:89] found id: ""
	I1217 20:39:00.540561  528764 logs.go:282] 0 containers: []
	W1217 20:39:00.540569  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:39:00.540576  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:39:00.540586  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:39:00.607064  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:39:00.607084  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:39:00.639882  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:39:00.639899  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:39:00.705607  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:39:00.705629  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:39:00.721491  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:39:00.721506  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:39:00.784593  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:39:00.776120   15140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:00.776725   15140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:00.778453   15140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:00.778972   15140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:00.780702   15140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 20:39:03.284822  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:39:03.295036  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:39:03.295097  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:39:03.333750  528764 cri.go:89] found id: ""
	I1217 20:39:03.333778  528764 logs.go:282] 0 containers: []
	W1217 20:39:03.333786  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:39:03.333792  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:39:03.333861  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:39:03.363983  528764 cri.go:89] found id: ""
	I1217 20:39:03.363997  528764 logs.go:282] 0 containers: []
	W1217 20:39:03.364004  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:39:03.364024  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:39:03.364082  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:39:03.392963  528764 cri.go:89] found id: ""
	I1217 20:39:03.392977  528764 logs.go:282] 0 containers: []
	W1217 20:39:03.392984  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:39:03.392989  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:39:03.393044  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:39:03.419023  528764 cri.go:89] found id: ""
	I1217 20:39:03.419039  528764 logs.go:282] 0 containers: []
	W1217 20:39:03.419046  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:39:03.419052  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:39:03.419108  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:39:03.444813  528764 cri.go:89] found id: ""
	I1217 20:39:03.444826  528764 logs.go:282] 0 containers: []
	W1217 20:39:03.444833  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:39:03.444838  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:39:03.444895  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:39:03.468964  528764 cri.go:89] found id: ""
	I1217 20:39:03.468978  528764 logs.go:282] 0 containers: []
	W1217 20:39:03.468986  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:39:03.468996  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:39:03.469053  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:39:03.494050  528764 cri.go:89] found id: ""
	I1217 20:39:03.494063  528764 logs.go:282] 0 containers: []
	W1217 20:39:03.494071  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:39:03.494078  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:39:03.494087  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:39:03.559830  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:39:03.559849  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:39:03.575390  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:39:03.575407  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:39:03.642132  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:39:03.634093   15231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:03.634724   15231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:03.636305   15231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:03.636854   15231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:03.638302   15231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 20:39:03.642142  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:39:03.642153  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:39:03.710317  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:39:03.710339  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:39:06.242034  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:39:06.252695  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:39:06.252759  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:39:06.278446  528764 cri.go:89] found id: ""
	I1217 20:39:06.278460  528764 logs.go:282] 0 containers: []
	W1217 20:39:06.278467  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:39:06.278477  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:39:06.278573  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:39:06.304597  528764 cri.go:89] found id: ""
	I1217 20:39:06.304612  528764 logs.go:282] 0 containers: []
	W1217 20:39:06.304620  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:39:06.304630  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:39:06.304702  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:39:06.345678  528764 cri.go:89] found id: ""
	I1217 20:39:06.345693  528764 logs.go:282] 0 containers: []
	W1217 20:39:06.345700  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:39:06.345706  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:39:06.345764  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:39:06.381455  528764 cri.go:89] found id: ""
	I1217 20:39:06.381469  528764 logs.go:282] 0 containers: []
	W1217 20:39:06.381476  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:39:06.381482  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:39:06.381542  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:39:06.410677  528764 cri.go:89] found id: ""
	I1217 20:39:06.410691  528764 logs.go:282] 0 containers: []
	W1217 20:39:06.410698  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:39:06.410704  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:39:06.410774  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:39:06.436535  528764 cri.go:89] found id: ""
	I1217 20:39:06.436549  528764 logs.go:282] 0 containers: []
	W1217 20:39:06.436556  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:39:06.436564  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:39:06.436621  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:39:06.467306  528764 cri.go:89] found id: ""
	I1217 20:39:06.467320  528764 logs.go:282] 0 containers: []
	W1217 20:39:06.467327  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:39:06.467335  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:39:06.467345  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:39:06.533557  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:39:06.533577  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:39:06.548883  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:39:06.548901  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:39:06.613032  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:39:06.604590   15336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:06.605314   15336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:06.606990   15336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:06.607539   15336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:06.609092   15336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 20:39:06.613048  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:39:06.613068  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:39:06.682237  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:39:06.682258  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:39:09.211382  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:39:09.221300  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:39:09.221359  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:39:09.246764  528764 cri.go:89] found id: ""
	I1217 20:39:09.246778  528764 logs.go:282] 0 containers: []
	W1217 20:39:09.246785  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:39:09.246790  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:39:09.246867  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:39:09.271248  528764 cri.go:89] found id: ""
	I1217 20:39:09.271261  528764 logs.go:282] 0 containers: []
	W1217 20:39:09.271268  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:39:09.271273  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:39:09.271343  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:39:09.296093  528764 cri.go:89] found id: ""
	I1217 20:39:09.296107  528764 logs.go:282] 0 containers: []
	W1217 20:39:09.296114  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:39:09.296120  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:39:09.296175  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:39:09.325215  528764 cri.go:89] found id: ""
	I1217 20:39:09.325230  528764 logs.go:282] 0 containers: []
	W1217 20:39:09.325236  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:39:09.325241  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:39:09.325304  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:39:09.352141  528764 cri.go:89] found id: ""
	I1217 20:39:09.352155  528764 logs.go:282] 0 containers: []
	W1217 20:39:09.352162  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:39:09.352167  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:39:09.352237  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:39:09.383006  528764 cri.go:89] found id: ""
	I1217 20:39:09.383021  528764 logs.go:282] 0 containers: []
	W1217 20:39:09.383028  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:39:09.383034  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:39:09.383113  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:39:09.414504  528764 cri.go:89] found id: ""
	I1217 20:39:09.414518  528764 logs.go:282] 0 containers: []
	W1217 20:39:09.414526  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:39:09.414534  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:39:09.414566  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:39:09.483870  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:39:09.483889  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:39:09.498851  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:39:09.498867  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:39:09.569431  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:39:09.561559   15441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:09.562122   15441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:09.563640   15441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:09.564216   15441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:09.565635   15441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 20:39:09.569442  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:39:09.569452  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:39:09.636946  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:39:09.636966  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:39:12.165906  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:39:12.176117  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:39:12.176184  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:39:12.202030  528764 cri.go:89] found id: ""
	I1217 20:39:12.202043  528764 logs.go:282] 0 containers: []
	W1217 20:39:12.202051  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:39:12.202056  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:39:12.202111  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:39:12.230473  528764 cri.go:89] found id: ""
	I1217 20:39:12.230487  528764 logs.go:282] 0 containers: []
	W1217 20:39:12.230495  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:39:12.230500  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:39:12.230559  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:39:12.256663  528764 cri.go:89] found id: ""
	I1217 20:39:12.256677  528764 logs.go:282] 0 containers: []
	W1217 20:39:12.256685  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:39:12.256690  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:39:12.256747  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:39:12.284083  528764 cri.go:89] found id: ""
	I1217 20:39:12.284096  528764 logs.go:282] 0 containers: []
	W1217 20:39:12.284104  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:39:12.284109  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:39:12.284168  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:39:12.309047  528764 cri.go:89] found id: ""
	I1217 20:39:12.309062  528764 logs.go:282] 0 containers: []
	W1217 20:39:12.309070  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:39:12.309075  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:39:12.309134  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:39:12.351942  528764 cri.go:89] found id: ""
	I1217 20:39:12.351957  528764 logs.go:282] 0 containers: []
	W1217 20:39:12.351969  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:39:12.351975  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:39:12.352034  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:39:12.390734  528764 cri.go:89] found id: ""
	I1217 20:39:12.390765  528764 logs.go:282] 0 containers: []
	W1217 20:39:12.390773  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:39:12.390782  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:39:12.390793  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:39:12.456083  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:39:12.456103  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:39:12.471218  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:39:12.471239  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:39:12.538690  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:39:12.527207   15546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:12.527702   15546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:12.529370   15546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:12.529843   15546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:12.531751   15546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:39:12.527207   15546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:12.527702   15546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:12.529370   15546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:12.529843   15546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:12.531751   15546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
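Every `kubectl describe nodes` attempt in this section fails the same way: `dial tcp [::1]:8441: connect: connection refused`, meaning nothing is listening on the apiserver port, which is consistent with the empty kube-apiserver container listing above. A quick manual check from the node, sketched under the assumption that `ss` and `curl` are available in the image:

    # Is anything listening on the apiserver port?
    sudo ss -tlnp | grep ':8441' || echo "no listener on 8441"
    # Probe the endpoint the way kubectl does; -k skips cert checks,
    # so this only tests reachability, not auth
    curl -sk https://localhost:8441/healthz || echo "apiserver unreachable"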
	I1217 20:39:12.538707  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:39:12.538718  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:39:12.605751  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:39:12.605772  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:39:15.135835  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:39:15.146221  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:39:15.146280  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:39:15.176272  528764 cri.go:89] found id: ""
	I1217 20:39:15.176286  528764 logs.go:282] 0 containers: []
	W1217 20:39:15.176294  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:39:15.176301  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:39:15.176357  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:39:15.206452  528764 cri.go:89] found id: ""
	I1217 20:39:15.206466  528764 logs.go:282] 0 containers: []
	W1217 20:39:15.206474  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:39:15.206479  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:39:15.206548  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:39:15.231899  528764 cri.go:89] found id: ""
	I1217 20:39:15.231914  528764 logs.go:282] 0 containers: []
	W1217 20:39:15.231921  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:39:15.231927  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:39:15.231996  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:39:15.257093  528764 cri.go:89] found id: ""
	I1217 20:39:15.257106  528764 logs.go:282] 0 containers: []
	W1217 20:39:15.257113  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:39:15.257119  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:39:15.257174  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:39:15.281692  528764 cri.go:89] found id: ""
	I1217 20:39:15.281706  528764 logs.go:282] 0 containers: []
	W1217 20:39:15.281714  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:39:15.281719  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:39:15.281777  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:39:15.310093  528764 cri.go:89] found id: ""
	I1217 20:39:15.310107  528764 logs.go:282] 0 containers: []
	W1217 20:39:15.310114  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:39:15.310119  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:39:15.310193  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:39:15.349800  528764 cri.go:89] found id: ""
	I1217 20:39:15.349813  528764 logs.go:282] 0 containers: []
	W1217 20:39:15.349830  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:39:15.349839  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:39:15.349850  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:39:15.426883  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:39:15.426904  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:39:15.442044  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:39:15.442059  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:39:15.512531  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:39:15.503665   15651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:15.504418   15651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:15.506067   15651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:15.506537   15651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:15.508077   15651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:39:15.503665   15651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:15.504418   15651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:15.506067   15651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:15.506537   15651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:15.508077   15651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:39:15.512542  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:39:15.512554  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:39:15.587396  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:39:15.587422  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:39:18.121184  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:39:18.131563  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:39:18.131644  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:39:18.157091  528764 cri.go:89] found id: ""
	I1217 20:39:18.157105  528764 logs.go:282] 0 containers: []
	W1217 20:39:18.157113  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:39:18.157118  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:39:18.157177  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:39:18.183414  528764 cri.go:89] found id: ""
	I1217 20:39:18.183428  528764 logs.go:282] 0 containers: []
	W1217 20:39:18.183452  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:39:18.183457  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:39:18.183523  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:39:18.210558  528764 cri.go:89] found id: ""
	I1217 20:39:18.210586  528764 logs.go:282] 0 containers: []
	W1217 20:39:18.210595  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:39:18.210600  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:39:18.210667  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:39:18.236623  528764 cri.go:89] found id: ""
	I1217 20:39:18.236653  528764 logs.go:282] 0 containers: []
	W1217 20:39:18.236661  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:39:18.236666  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:39:18.236730  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:39:18.263889  528764 cri.go:89] found id: ""
	I1217 20:39:18.263903  528764 logs.go:282] 0 containers: []
	W1217 20:39:18.263911  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:39:18.263916  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:39:18.263977  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:39:18.289661  528764 cri.go:89] found id: ""
	I1217 20:39:18.289675  528764 logs.go:282] 0 containers: []
	W1217 20:39:18.289683  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:39:18.289688  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:39:18.289743  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:39:18.314115  528764 cri.go:89] found id: ""
	I1217 20:39:18.314129  528764 logs.go:282] 0 containers: []
	W1217 20:39:18.314136  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:39:18.314143  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:39:18.314165  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:39:18.382890  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:39:18.382909  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:39:18.425251  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:39:18.425268  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:39:18.493317  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:39:18.493336  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:39:18.509454  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:39:18.509470  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:39:18.571731  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:39:18.563250   15771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:18.563867   15771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:18.564819   15771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:18.566275   15771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:18.566732   15771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:39:18.563250   15771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:18.563867   15771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:18.564819   15771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:18.566275   15771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:18.566732   15771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
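The cycles above repeat roughly every three seconds (20:39:12, :15, :18, :21, ...) because the wait loop re-runs `pgrep -xnf kube-apiserver.*minikube.*` and, while it keeps failing, re-gathers the diagnostic logs before retrying. A bash sketch of an equivalent bounded wait; the 120s deadline and 3s interval are illustrative values, not taken from the log:

    deadline=$((SECONDS + 120))
    # pgrep flags as in the log: -x exact match, -n newest, -f full cmdline
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      if [ "$SECONDS" -ge "$deadline" ]; then
        echo "timed out waiting for kube-apiserver" >&2
        exit 1
      fi
      sleep 3
    done
    echo "kube-apiserver is running"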
	I1217 20:39:21.073445  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:39:21.083815  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:39:21.083874  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:39:21.113281  528764 cri.go:89] found id: ""
	I1217 20:39:21.113295  528764 logs.go:282] 0 containers: []
	W1217 20:39:21.113302  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:39:21.113307  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:39:21.113365  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:39:21.142024  528764 cri.go:89] found id: ""
	I1217 20:39:21.142039  528764 logs.go:282] 0 containers: []
	W1217 20:39:21.142046  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:39:21.142059  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:39:21.142123  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:39:21.170658  528764 cri.go:89] found id: ""
	I1217 20:39:21.170678  528764 logs.go:282] 0 containers: []
	W1217 20:39:21.170686  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:39:21.170691  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:39:21.170756  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:39:21.196194  528764 cri.go:89] found id: ""
	I1217 20:39:21.196207  528764 logs.go:282] 0 containers: []
	W1217 20:39:21.196214  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:39:21.196220  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:39:21.196277  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:39:21.222255  528764 cri.go:89] found id: ""
	I1217 20:39:21.222269  528764 logs.go:282] 0 containers: []
	W1217 20:39:21.222276  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:39:21.222282  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:39:21.222355  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:39:21.247912  528764 cri.go:89] found id: ""
	I1217 20:39:21.247926  528764 logs.go:282] 0 containers: []
	W1217 20:39:21.247933  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:39:21.247939  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:39:21.247996  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:39:21.278136  528764 cri.go:89] found id: ""
	I1217 20:39:21.278151  528764 logs.go:282] 0 containers: []
	W1217 20:39:21.278158  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:39:21.278175  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:39:21.278187  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:39:21.346881  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:39:21.346899  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:39:21.363101  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:39:21.363117  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:39:21.431000  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:39:21.421965   15864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:21.422735   15864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:21.424535   15864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:21.425238   15864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:21.426896   15864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:39:21.421965   15864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:21.422735   15864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:21.424535   15864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:21.425238   15864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:21.426896   15864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:39:21.431011  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:39:21.431024  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:39:21.499494  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:39:21.499512  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:39:24.028859  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:39:24.039467  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:39:24.039528  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:39:24.065108  528764 cri.go:89] found id: ""
	I1217 20:39:24.065122  528764 logs.go:282] 0 containers: []
	W1217 20:39:24.065130  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:39:24.065135  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:39:24.065193  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:39:24.090624  528764 cri.go:89] found id: ""
	I1217 20:39:24.090638  528764 logs.go:282] 0 containers: []
	W1217 20:39:24.090647  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:39:24.090652  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:39:24.090710  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:39:24.116315  528764 cri.go:89] found id: ""
	I1217 20:39:24.116331  528764 logs.go:282] 0 containers: []
	W1217 20:39:24.116339  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:39:24.116345  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:39:24.116414  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:39:24.141792  528764 cri.go:89] found id: ""
	I1217 20:39:24.141806  528764 logs.go:282] 0 containers: []
	W1217 20:39:24.141813  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:39:24.141818  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:39:24.141877  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:39:24.170297  528764 cri.go:89] found id: ""
	I1217 20:39:24.170310  528764 logs.go:282] 0 containers: []
	W1217 20:39:24.170318  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:39:24.170324  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:39:24.170378  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:39:24.199383  528764 cri.go:89] found id: ""
	I1217 20:39:24.199397  528764 logs.go:282] 0 containers: []
	W1217 20:39:24.199404  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:39:24.199411  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:39:24.199477  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:39:24.224443  528764 cri.go:89] found id: ""
	I1217 20:39:24.224457  528764 logs.go:282] 0 containers: []
	W1217 20:39:24.224464  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:39:24.224471  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:39:24.224496  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:39:24.253379  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:39:24.253396  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:39:24.322404  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:39:24.322423  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:39:24.340551  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:39:24.340569  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:39:24.409290  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:39:24.400697   15977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:24.401399   15977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:24.402586   15977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:24.403250   15977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:24.404963   15977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:39:24.400697   15977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:24.401399   15977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:24.402586   15977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:24.403250   15977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:24.404963   15977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:39:24.409305  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:39:24.409316  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:39:26.976820  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:39:26.986804  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:39:26.986885  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:39:27.015438  528764 cri.go:89] found id: ""
	I1217 20:39:27.015453  528764 logs.go:282] 0 containers: []
	W1217 20:39:27.015460  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:39:27.015466  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:39:27.015545  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:39:27.041591  528764 cri.go:89] found id: ""
	I1217 20:39:27.041605  528764 logs.go:282] 0 containers: []
	W1217 20:39:27.041613  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:39:27.041619  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:39:27.041680  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:39:27.066798  528764 cri.go:89] found id: ""
	I1217 20:39:27.066812  528764 logs.go:282] 0 containers: []
	W1217 20:39:27.066819  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:39:27.066851  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:39:27.066908  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:39:27.091716  528764 cri.go:89] found id: ""
	I1217 20:39:27.091730  528764 logs.go:282] 0 containers: []
	W1217 20:39:27.091737  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:39:27.091743  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:39:27.091797  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:39:27.116523  528764 cri.go:89] found id: ""
	I1217 20:39:27.116536  528764 logs.go:282] 0 containers: []
	W1217 20:39:27.116544  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:39:27.116550  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:39:27.116612  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:39:27.140982  528764 cri.go:89] found id: ""
	I1217 20:39:27.140996  528764 logs.go:282] 0 containers: []
	W1217 20:39:27.141004  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:39:27.141009  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:39:27.141064  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:39:27.170754  528764 cri.go:89] found id: ""
	I1217 20:39:27.170769  528764 logs.go:282] 0 containers: []
	W1217 20:39:27.170777  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:39:27.170784  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:39:27.170805  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:39:27.234403  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:39:27.226295   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:27.226681   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:27.228247   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:27.228832   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:27.230386   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:39:27.226295   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:27.226681   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:27.228247   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:27.228832   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:27.230386   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:39:27.234413  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:39:27.234463  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:39:27.306551  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:39:27.306570  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:39:27.342575  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:39:27.342597  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:39:27.416305  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:39:27.416325  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
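Each failed iteration gathers the same diagnostic bundle (the order varies between cycles): the kubelet and CRI-O journals, filtered dmesg, a `describe nodes` attempt, and a container listing with a docker fallback. The bundle can be reproduced by hand with the commands copied verbatim from the Run: lines above:

    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a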
	I1217 20:39:29.931568  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:39:29.941696  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:39:29.941790  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:39:29.970561  528764 cri.go:89] found id: ""
	I1217 20:39:29.970576  528764 logs.go:282] 0 containers: []
	W1217 20:39:29.970583  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:39:29.970588  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:39:29.970644  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:39:29.995538  528764 cri.go:89] found id: ""
	I1217 20:39:29.995551  528764 logs.go:282] 0 containers: []
	W1217 20:39:29.995559  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:39:29.995564  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:39:29.995645  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:39:30.047472  528764 cri.go:89] found id: ""
	I1217 20:39:30.047487  528764 logs.go:282] 0 containers: []
	W1217 20:39:30.047496  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:39:30.047501  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:39:30.047568  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:39:30.077580  528764 cri.go:89] found id: ""
	I1217 20:39:30.077595  528764 logs.go:282] 0 containers: []
	W1217 20:39:30.077603  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:39:30.077609  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:39:30.077686  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:39:30.111544  528764 cri.go:89] found id: ""
	I1217 20:39:30.111574  528764 logs.go:282] 0 containers: []
	W1217 20:39:30.111618  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:39:30.111624  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:39:30.111705  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:39:30.139478  528764 cri.go:89] found id: ""
	I1217 20:39:30.139504  528764 logs.go:282] 0 containers: []
	W1217 20:39:30.139513  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:39:30.139518  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:39:30.139611  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:39:30.169107  528764 cri.go:89] found id: ""
	I1217 20:39:30.169121  528764 logs.go:282] 0 containers: []
	W1217 20:39:30.169128  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:39:30.169136  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:39:30.169146  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:39:30.234963  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:39:30.234982  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:39:30.250550  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:39:30.250577  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:39:30.320870  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:39:30.310827   16172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:30.311705   16172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:30.313455   16172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:30.314025   16172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:30.315795   16172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:39:30.310827   16172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:30.311705   16172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:30.313455   16172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:30.314025   16172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:30.315795   16172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:39:30.320884  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:39:30.320894  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:39:30.397776  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:39:30.397796  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:39:32.932751  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:39:32.942813  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:39:32.942885  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:39:32.968405  528764 cri.go:89] found id: ""
	I1217 20:39:32.968418  528764 logs.go:282] 0 containers: []
	W1217 20:39:32.968425  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:39:32.968431  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:39:32.968503  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:39:32.991973  528764 cri.go:89] found id: ""
	I1217 20:39:32.991987  528764 logs.go:282] 0 containers: []
	W1217 20:39:32.991994  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:39:32.992005  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:39:32.992063  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:39:33.019478  528764 cri.go:89] found id: ""
	I1217 20:39:33.019492  528764 logs.go:282] 0 containers: []
	W1217 20:39:33.019500  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:39:33.019505  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:39:33.019572  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:39:33.044942  528764 cri.go:89] found id: ""
	I1217 20:39:33.044958  528764 logs.go:282] 0 containers: []
	W1217 20:39:33.044965  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:39:33.044970  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:39:33.045028  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:39:33.072242  528764 cri.go:89] found id: ""
	I1217 20:39:33.072256  528764 logs.go:282] 0 containers: []
	W1217 20:39:33.072263  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:39:33.072268  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:39:33.072332  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:39:33.101598  528764 cri.go:89] found id: ""
	I1217 20:39:33.101611  528764 logs.go:282] 0 containers: []
	W1217 20:39:33.101619  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:39:33.101624  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:39:33.101677  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:39:33.127765  528764 cri.go:89] found id: ""
	I1217 20:39:33.127780  528764 logs.go:282] 0 containers: []
	W1217 20:39:33.127805  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:39:33.127813  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:39:33.127830  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:39:33.193505  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:39:33.193524  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:39:33.209404  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:39:33.209419  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:39:33.278213  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:39:33.269512   16277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:33.270341   16277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:33.272086   16277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:33.272605   16277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:33.274151   16277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:39:33.269512   16277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:33.270341   16277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:33.272086   16277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:33.272605   16277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:33.274151   16277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:39:33.278224  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:39:33.278234  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:39:33.352890  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:39:33.352911  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:39:35.892717  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:39:35.902865  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:39:35.902923  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:39:35.927963  528764 cri.go:89] found id: ""
	I1217 20:39:35.927977  528764 logs.go:282] 0 containers: []
	W1217 20:39:35.927985  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:39:35.927990  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:39:35.928047  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:39:35.953995  528764 cri.go:89] found id: ""
	I1217 20:39:35.954010  528764 logs.go:282] 0 containers: []
	W1217 20:39:35.954017  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:39:35.954022  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:39:35.954078  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:39:35.978944  528764 cri.go:89] found id: ""
	I1217 20:39:35.978958  528764 logs.go:282] 0 containers: []
	W1217 20:39:35.978965  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:39:35.978971  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:39:35.979027  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:39:36.009908  528764 cri.go:89] found id: ""
	I1217 20:39:36.009923  528764 logs.go:282] 0 containers: []
	W1217 20:39:36.009932  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:39:36.009938  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:39:36.010005  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:39:36.036093  528764 cri.go:89] found id: ""
	I1217 20:39:36.036106  528764 logs.go:282] 0 containers: []
	W1217 20:39:36.036114  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:39:36.036125  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:39:36.036189  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:39:36.064858  528764 cri.go:89] found id: ""
	I1217 20:39:36.064873  528764 logs.go:282] 0 containers: []
	W1217 20:39:36.064880  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:39:36.064888  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:39:36.064943  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:39:36.091213  528764 cri.go:89] found id: ""
	I1217 20:39:36.091228  528764 logs.go:282] 0 containers: []
	W1217 20:39:36.091236  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:39:36.091243  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:39:36.091265  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:39:36.123131  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:39:36.123147  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:39:36.192190  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:39:36.192209  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:39:36.207423  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:39:36.207441  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:39:36.274672  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:39:36.265622   16394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:36.266359   16394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:36.267351   16394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:36.268947   16394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:36.269621   16394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:39:36.265622   16394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:36.266359   16394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:36.267351   16394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:36.268947   16394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:36.269621   16394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:39:36.274682  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:39:36.274693  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
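
The block above is one pass of minikube's diagnostic loop: probe for a running kube-apiserver process, list CRI containers for each control-plane component by name, and, when nothing is found, gather kubelet, dmesg, describe-nodes, CRI-O, and container-status logs before retrying a few seconds later. Below is a standalone sketch of that probe loop, run locally with os/exec rather than minikube's ssh_runner; crictl and passwordless sudo on the host are assumptions, and this is illustrative only, not minikube's actual code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// containerIDs mirrors "sudo crictl ps -a --quiet --name=<name>": it returns
// the container IDs crictl prints, one per line, or nothing when none match.
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for i := 0; i < 10; i++ {
		// "pgrep -xnf" exits non-zero when no process matches the pattern.
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			fmt.Println("kube-apiserver process is up")
			return
		}
		ids, _ := containerIDs("kube-apiserver")
		fmt.Printf("found %d kube-apiserver containers; retrying\n", len(ids))
		time.Sleep(3 * time.Second)
	}
	fmt.Println("gave up waiting for kube-apiserver")
}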
	I1217 20:39:38.848137  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:39:38.858186  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:39:38.858245  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:39:38.887476  528764 cri.go:89] found id: ""
	I1217 20:39:38.887491  528764 logs.go:282] 0 containers: []
	W1217 20:39:38.887498  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:39:38.887503  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:39:38.887559  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:39:38.913669  528764 cri.go:89] found id: ""
	I1217 20:39:38.913683  528764 logs.go:282] 0 containers: []
	W1217 20:39:38.913691  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:39:38.913696  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:39:38.913753  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:39:38.938922  528764 cri.go:89] found id: ""
	I1217 20:39:38.938937  528764 logs.go:282] 0 containers: []
	W1217 20:39:38.938945  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:39:38.938950  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:39:38.939010  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:39:38.964782  528764 cri.go:89] found id: ""
	I1217 20:39:38.964796  528764 logs.go:282] 0 containers: []
	W1217 20:39:38.964804  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:39:38.964809  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:39:38.964869  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:39:38.990990  528764 cri.go:89] found id: ""
	I1217 20:39:38.991004  528764 logs.go:282] 0 containers: []
	W1217 20:39:38.991012  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:39:38.991017  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:39:38.991087  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:39:39.019624  528764 cri.go:89] found id: ""
	I1217 20:39:39.019638  528764 logs.go:282] 0 containers: []
	W1217 20:39:39.019645  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:39:39.019651  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:39:39.019712  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:39:39.049943  528764 cri.go:89] found id: ""
	I1217 20:39:39.049957  528764 logs.go:282] 0 containers: []
	W1217 20:39:39.049964  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:39:39.049971  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:39:39.049982  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:39:39.114679  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:39:39.114699  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:39:39.129526  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:39:39.129544  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:39:39.192131  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:39:39.184273   16490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:39.185000   16490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:39.186617   16490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:39.186938   16490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:39.188434   16490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:39:39.184273   16490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:39.185000   16490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:39.186617   16490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:39.186938   16490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:39.188434   16490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:39:39.192141  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:39:39.192151  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:39:39.262829  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:39:39.262849  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
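
Note the fallback built into the "container status" line above: the shell command tries crictl first (via "which crictl || echo crictl") and falls back to "docker ps -a" if crictl is missing or fails. A minimal sketch of the same two-step fallback, again as a local-exec approximation with tool availability as an assumption:

package main

import (
	"fmt"
	"os/exec"
)

// containerStatus prefers "crictl ps -a" and falls back to "docker ps -a",
// mirroring the shell one-liner in the log above.
func containerStatus() (string, error) {
	if out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput(); err == nil {
		return string(out), nil
	}
	out, err := exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
	return string(out), err
}

func main() {
	s, err := containerStatus()
	if err != nil {
		fmt.Println("neither crictl nor docker produced a listing:", err)
		return
	}
	fmt.Print(s)
}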
	I1217 20:39:41.796129  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:39:41.805988  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:39:41.806050  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:39:41.830659  528764 cri.go:89] found id: ""
	I1217 20:39:41.830688  528764 logs.go:282] 0 containers: []
	W1217 20:39:41.830696  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:39:41.830702  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:39:41.830772  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:39:41.855846  528764 cri.go:89] found id: ""
	I1217 20:39:41.855861  528764 logs.go:282] 0 containers: []
	W1217 20:39:41.855868  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:39:41.855874  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:39:41.855937  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:39:41.880126  528764 cri.go:89] found id: ""
	I1217 20:39:41.880139  528764 logs.go:282] 0 containers: []
	W1217 20:39:41.880147  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:39:41.880151  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:39:41.880205  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:39:41.909006  528764 cri.go:89] found id: ""
	I1217 20:39:41.909020  528764 logs.go:282] 0 containers: []
	W1217 20:39:41.909027  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:39:41.909032  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:39:41.909088  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:39:41.938559  528764 cri.go:89] found id: ""
	I1217 20:39:41.938573  528764 logs.go:282] 0 containers: []
	W1217 20:39:41.938580  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:39:41.938585  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:39:41.938646  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:39:41.966291  528764 cri.go:89] found id: ""
	I1217 20:39:41.966305  528764 logs.go:282] 0 containers: []
	W1217 20:39:41.966312  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:39:41.966317  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:39:41.966380  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:39:41.991150  528764 cri.go:89] found id: ""
	I1217 20:39:41.991164  528764 logs.go:282] 0 containers: []
	W1217 20:39:41.991172  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:39:41.991180  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:39:41.991190  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:39:42.024918  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:39:42.024936  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:39:42.094047  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:39:42.094069  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:39:42.113717  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:39:42.113737  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:39:42.191163  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:39:42.180141   16604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:42.180682   16604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:42.183783   16604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:42.184295   16604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:42.186218   16604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:39:42.180141   16604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:42.180682   16604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:42.183783   16604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:42.184295   16604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:42.186218   16604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:39:42.191176  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:39:42.191195  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:39:44.772767  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:39:44.783138  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:39:44.783204  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:39:44.811282  528764 cri.go:89] found id: ""
	I1217 20:39:44.811296  528764 logs.go:282] 0 containers: []
	W1217 20:39:44.811304  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:39:44.811309  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:39:44.811369  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:39:44.838690  528764 cri.go:89] found id: ""
	I1217 20:39:44.838704  528764 logs.go:282] 0 containers: []
	W1217 20:39:44.838711  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:39:44.838717  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:39:44.838776  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:39:44.866668  528764 cri.go:89] found id: ""
	I1217 20:39:44.866683  528764 logs.go:282] 0 containers: []
	W1217 20:39:44.866690  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:39:44.866696  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:39:44.866751  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:39:44.892383  528764 cri.go:89] found id: ""
	I1217 20:39:44.892397  528764 logs.go:282] 0 containers: []
	W1217 20:39:44.892405  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:39:44.892410  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:39:44.892468  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:39:44.921797  528764 cri.go:89] found id: ""
	I1217 20:39:44.921812  528764 logs.go:282] 0 containers: []
	W1217 20:39:44.921819  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:39:44.921825  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:39:44.921885  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:39:44.947362  528764 cri.go:89] found id: ""
	I1217 20:39:44.947376  528764 logs.go:282] 0 containers: []
	W1217 20:39:44.947384  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:39:44.947389  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:39:44.947446  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:39:44.974284  528764 cri.go:89] found id: ""
	I1217 20:39:44.974297  528764 logs.go:282] 0 containers: []
	W1217 20:39:44.974305  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:39:44.974312  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:39:44.974323  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:39:45.077487  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:39:45.067021   16692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:45.068124   16692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:45.068971   16692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:45.071090   16692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:45.072380   16692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:39:45.067021   16692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:45.068124   16692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:45.068971   16692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:45.071090   16692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:45.072380   16692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:39:45.077499  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:39:45.077511  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:39:45.185472  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:39:45.185499  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:39:45.244734  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:39:45.244753  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:39:45.320383  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:39:45.320403  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:39:47.839254  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:39:47.849450  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:39:47.849509  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:39:47.878517  528764 cri.go:89] found id: ""
	I1217 20:39:47.878531  528764 logs.go:282] 0 containers: []
	W1217 20:39:47.878539  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:39:47.878554  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:39:47.878612  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:39:47.904739  528764 cri.go:89] found id: ""
	I1217 20:39:47.904754  528764 logs.go:282] 0 containers: []
	W1217 20:39:47.904762  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:39:47.904767  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:39:47.904823  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:39:47.929572  528764 cri.go:89] found id: ""
	I1217 20:39:47.929586  528764 logs.go:282] 0 containers: []
	W1217 20:39:47.929593  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:39:47.929599  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:39:47.929658  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:39:47.958617  528764 cri.go:89] found id: ""
	I1217 20:39:47.958631  528764 logs.go:282] 0 containers: []
	W1217 20:39:47.958639  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:39:47.958644  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:39:47.958701  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:39:47.984420  528764 cri.go:89] found id: ""
	I1217 20:39:47.984434  528764 logs.go:282] 0 containers: []
	W1217 20:39:47.984441  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:39:47.984447  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:39:47.984504  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:39:48.013373  528764 cri.go:89] found id: ""
	I1217 20:39:48.013389  528764 logs.go:282] 0 containers: []
	W1217 20:39:48.013396  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:39:48.013402  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:39:48.013461  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:39:48.040700  528764 cri.go:89] found id: ""
	I1217 20:39:48.040713  528764 logs.go:282] 0 containers: []
	W1217 20:39:48.040720  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:39:48.040728  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:39:48.040740  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:39:48.112503  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:39:48.112522  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:39:48.148498  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:39:48.148514  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:39:48.215575  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:39:48.215644  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:39:48.230769  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:39:48.230785  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:39:48.305622  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:39:48.297446   16820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:48.298244   16820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:48.299754   16820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:48.300303   16820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:48.301821   16820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:39:48.297446   16820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:48.298244   16820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:48.299754   16820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:48.300303   16820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:48.301821   16820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
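
Every "failed describe nodes" block in this log has the same root cause: kubectl cannot open a TCP connection to localhost:8441, the apiserver port for this profile, so the failure happens at the socket level before any Kubernetes API call is made. A direct dial reproduces the symptom without kubectl; the host and port are taken from the log, the rest is illustrative:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		// Matches the kubectl failure mode above:
		// dial tcp [::1]:8441: connect: connection refused
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8441")
}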
	I1217 20:39:50.807281  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:39:50.819012  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:39:50.819075  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:39:50.845131  528764 cri.go:89] found id: ""
	I1217 20:39:50.845145  528764 logs.go:282] 0 containers: []
	W1217 20:39:50.845153  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:39:50.845158  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:39:50.845215  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:39:50.878758  528764 cri.go:89] found id: ""
	I1217 20:39:50.878771  528764 logs.go:282] 0 containers: []
	W1217 20:39:50.878778  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:39:50.878783  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:39:50.878851  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:39:50.905139  528764 cri.go:89] found id: ""
	I1217 20:39:50.905154  528764 logs.go:282] 0 containers: []
	W1217 20:39:50.905161  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:39:50.905167  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:39:50.905234  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:39:50.930885  528764 cri.go:89] found id: ""
	I1217 20:39:50.930898  528764 logs.go:282] 0 containers: []
	W1217 20:39:50.930923  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:39:50.930928  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:39:50.931004  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:39:50.961249  528764 cri.go:89] found id: ""
	I1217 20:39:50.961264  528764 logs.go:282] 0 containers: []
	W1217 20:39:50.961271  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:39:50.961281  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:39:50.961339  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:39:50.990268  528764 cri.go:89] found id: ""
	I1217 20:39:50.990283  528764 logs.go:282] 0 containers: []
	W1217 20:39:50.990290  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:39:50.990305  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:39:50.990368  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:39:51.022220  528764 cri.go:89] found id: ""
	I1217 20:39:51.022235  528764 logs.go:282] 0 containers: []
	W1217 20:39:51.022253  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:39:51.022260  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:39:51.022272  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:39:51.037279  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:39:51.037301  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:39:51.104091  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:39:51.095357   16908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:51.096330   16908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:51.098113   16908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:51.098463   16908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:51.100108   16908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:39:51.095357   16908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:51.096330   16908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:51.098113   16908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:51.098463   16908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:51.100108   16908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:39:51.104101  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:39:51.104112  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:39:51.170651  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:39:51.170674  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:39:51.200399  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:39:51.200421  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:39:53.770767  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:39:53.780793  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:39:53.780851  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:39:53.809348  528764 cri.go:89] found id: ""
	I1217 20:39:53.809362  528764 logs.go:282] 0 containers: []
	W1217 20:39:53.809370  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:39:53.809375  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:39:53.809441  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:39:53.834689  528764 cri.go:89] found id: ""
	I1217 20:39:53.834703  528764 logs.go:282] 0 containers: []
	W1217 20:39:53.834710  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:39:53.834716  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:39:53.834772  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:39:53.861465  528764 cri.go:89] found id: ""
	I1217 20:39:53.861483  528764 logs.go:282] 0 containers: []
	W1217 20:39:53.861491  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:39:53.861498  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:39:53.861562  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:39:53.891732  528764 cri.go:89] found id: ""
	I1217 20:39:53.891747  528764 logs.go:282] 0 containers: []
	W1217 20:39:53.891754  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:39:53.891759  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:39:53.891817  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:39:53.917938  528764 cri.go:89] found id: ""
	I1217 20:39:53.917952  528764 logs.go:282] 0 containers: []
	W1217 20:39:53.917959  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:39:53.917964  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:39:53.918024  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:39:53.943397  528764 cri.go:89] found id: ""
	I1217 20:39:53.943412  528764 logs.go:282] 0 containers: []
	W1217 20:39:53.943420  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:39:53.943431  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:39:53.943500  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:39:53.970499  528764 cri.go:89] found id: ""
	I1217 20:39:53.970514  528764 logs.go:282] 0 containers: []
	W1217 20:39:53.970521  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:39:53.970529  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:39:53.970540  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:39:54.037615  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:39:54.028803   17010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:54.030066   17010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:54.030771   17010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:54.032113   17010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:54.032669   17010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:39:54.028803   17010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:54.030066   17010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:54.030771   17010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:54.032113   17010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:54.032669   17010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:39:54.037625  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:39:54.037637  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:39:54.105683  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:39:54.105702  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:39:54.135408  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:39:54.135424  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:39:54.201915  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:39:54.201934  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:39:56.717571  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:39:56.727576  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:39:56.727663  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:39:56.752566  528764 cri.go:89] found id: ""
	I1217 20:39:56.752580  528764 logs.go:282] 0 containers: []
	W1217 20:39:56.752587  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:39:56.752593  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:39:56.752649  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:39:56.778100  528764 cri.go:89] found id: ""
	I1217 20:39:56.778114  528764 logs.go:282] 0 containers: []
	W1217 20:39:56.778123  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:39:56.778128  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:39:56.778188  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:39:56.810564  528764 cri.go:89] found id: ""
	I1217 20:39:56.810578  528764 logs.go:282] 0 containers: []
	W1217 20:39:56.810585  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:39:56.810590  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:39:56.810651  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:39:56.836110  528764 cri.go:89] found id: ""
	I1217 20:39:56.836123  528764 logs.go:282] 0 containers: []
	W1217 20:39:56.836130  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:39:56.836136  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:39:56.836192  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:39:56.860819  528764 cri.go:89] found id: ""
	I1217 20:39:56.860833  528764 logs.go:282] 0 containers: []
	W1217 20:39:56.860840  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:39:56.860845  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:39:56.860910  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:39:56.885378  528764 cri.go:89] found id: ""
	I1217 20:39:56.885392  528764 logs.go:282] 0 containers: []
	W1217 20:39:56.885400  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:39:56.885405  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:39:56.885464  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:39:56.910636  528764 cri.go:89] found id: ""
	I1217 20:39:56.910649  528764 logs.go:282] 0 containers: []
	W1217 20:39:56.910657  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:39:56.910664  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:39:56.910685  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:39:56.975973  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:39:56.975994  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:39:56.990897  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:39:56.990913  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:39:57.059420  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:39:57.050613   17117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:57.051527   17117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:57.053283   17117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:57.053833   17117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:57.055383   17117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:39:57.050613   17117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:57.051527   17117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:57.053283   17117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:57.053833   17117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:57.055383   17117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:39:57.059434  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:39:57.059444  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:39:57.127559  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:39:57.127588  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
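
The remaining gathering steps shell out to journalctl and dmesg with the flags shown above: the last 400 lines per systemd unit, and dmesg filtered to warn level and above. A sketch that runs the same three commands locally, with systemd and sudo on the host as assumptions:

package main

import (
	"fmt"
	"os/exec"
)

// gather runs one command under sudo and prints its combined output,
// loosely mirroring minikube's per-source log gathering.
func gather(name string, args ...string) {
	out, err := exec.Command("sudo", args...).CombinedOutput()
	fmt.Printf("== %s (err: %v) ==\n%s\n", name, err, out)
}

func main() {
	gather("kubelet", "journalctl", "-u", "kubelet", "-n", "400")
	gather("CRI-O", "journalctl", "-u", "crio", "-n", "400")
	gather("dmesg", "bash", "-c", "dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
}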
	I1217 20:39:59.660834  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:39:59.671347  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:39:59.671409  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:39:59.697317  528764 cri.go:89] found id: ""
	I1217 20:39:59.697331  528764 logs.go:282] 0 containers: []
	W1217 20:39:59.697338  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:39:59.697344  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:39:59.697400  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:39:59.721571  528764 cri.go:89] found id: ""
	I1217 20:39:59.721586  528764 logs.go:282] 0 containers: []
	W1217 20:39:59.721593  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:39:59.721601  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:39:59.721663  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:39:59.746819  528764 cri.go:89] found id: ""
	I1217 20:39:59.746835  528764 logs.go:282] 0 containers: []
	W1217 20:39:59.746843  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:39:59.746849  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:39:59.746909  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:39:59.773034  528764 cri.go:89] found id: ""
	I1217 20:39:59.773049  528764 logs.go:282] 0 containers: []
	W1217 20:39:59.773057  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:39:59.773062  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:39:59.773123  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:39:59.802418  528764 cri.go:89] found id: ""
	I1217 20:39:59.802441  528764 logs.go:282] 0 containers: []
	W1217 20:39:59.802449  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:39:59.802454  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:39:59.802524  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:39:59.831711  528764 cri.go:89] found id: ""
	I1217 20:39:59.831725  528764 logs.go:282] 0 containers: []
	W1217 20:39:59.831733  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:39:59.831739  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:39:59.831804  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:39:59.856953  528764 cri.go:89] found id: ""
	I1217 20:39:59.856967  528764 logs.go:282] 0 containers: []
	W1217 20:39:59.856975  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:39:59.856982  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:39:59.856995  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:39:59.884897  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:39:59.884914  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:39:59.949655  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:39:59.949677  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:39:59.964501  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:39:59.964517  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:40:00.094107  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:40:00.057057   17231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:00.058079   17231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:00.081049   17231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:00.085379   17231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:00.086028   17231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:40:00.057057   17231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:00.058079   17231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:00.081049   17231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:00.085379   17231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:00.086028   17231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:40:00.094120  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:40:00.094132  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:40:02.787739  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:40:02.797830  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:40:02.797894  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:40:02.834082  528764 cri.go:89] found id: ""
	I1217 20:40:02.834096  528764 logs.go:282] 0 containers: []
	W1217 20:40:02.834104  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:40:02.834109  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:40:02.834168  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:40:02.866743  528764 cri.go:89] found id: ""
	I1217 20:40:02.866756  528764 logs.go:282] 0 containers: []
	W1217 20:40:02.866763  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:40:02.866768  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:40:02.866837  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:40:02.895045  528764 cri.go:89] found id: ""
	I1217 20:40:02.895058  528764 logs.go:282] 0 containers: []
	W1217 20:40:02.895066  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:40:02.895071  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:40:02.895126  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:40:02.921557  528764 cri.go:89] found id: ""
	I1217 20:40:02.921570  528764 logs.go:282] 0 containers: []
	W1217 20:40:02.921580  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:40:02.921585  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:40:02.921641  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:40:02.952647  528764 cri.go:89] found id: ""
	I1217 20:40:02.952661  528764 logs.go:282] 0 containers: []
	W1217 20:40:02.952669  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:40:02.952675  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:40:02.952733  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:40:02.983298  528764 cri.go:89] found id: ""
	I1217 20:40:02.983312  528764 logs.go:282] 0 containers: []
	W1217 20:40:02.983319  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:40:02.983325  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:40:02.983389  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:40:03.010550  528764 cri.go:89] found id: ""
	I1217 20:40:03.010565  528764 logs.go:282] 0 containers: []
	W1217 20:40:03.010573  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:40:03.010581  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:40:03.010592  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:40:03.079310  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:40:03.079329  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:40:03.094479  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:40:03.094497  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:40:03.161221  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:40:03.151895   17325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:03.152622   17325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:03.154348   17325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:03.155044   17325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:03.156824   17325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:40:03.151895   17325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:03.152622   17325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:03.154348   17325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:03.155044   17325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:03.156824   17325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:40:03.161231  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:40:03.161242  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:40:03.227816  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:40:03.227835  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
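From here the sequence above repeats on a short poll (about every 3s in this log): minikube probes for a live apiserver, finds no control-plane containers at all (every crictl query comes back with an empty id list), and re-gathers the kubelet, dmesg, describe-nodes, CRI-O, and container-status output. The probe pair it loops on is:

    # the poll minikube repeats while waiting for the apiserver
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    sudo crictl ps -a --quiet --name=kube-apiserver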
	I1217 20:40:05.757487  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:40:05.767711  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:40:05.767773  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:40:05.793946  528764 cri.go:89] found id: ""
	I1217 20:40:05.793960  528764 logs.go:282] 0 containers: []
	W1217 20:40:05.793972  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:40:05.793978  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:40:05.794036  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:40:05.822285  528764 cri.go:89] found id: ""
	I1217 20:40:05.822299  528764 logs.go:282] 0 containers: []
	W1217 20:40:05.822306  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:40:05.822314  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:40:05.822371  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:40:05.850250  528764 cri.go:89] found id: ""
	I1217 20:40:05.850264  528764 logs.go:282] 0 containers: []
	W1217 20:40:05.850271  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:40:05.850277  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:40:05.850335  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:40:05.895396  528764 cri.go:89] found id: ""
	I1217 20:40:05.895410  528764 logs.go:282] 0 containers: []
	W1217 20:40:05.895417  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:40:05.895422  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:40:05.895477  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:40:05.922557  528764 cri.go:89] found id: ""
	I1217 20:40:05.922571  528764 logs.go:282] 0 containers: []
	W1217 20:40:05.922580  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:40:05.922586  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:40:05.922644  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:40:05.948573  528764 cri.go:89] found id: ""
	I1217 20:40:05.948586  528764 logs.go:282] 0 containers: []
	W1217 20:40:05.948594  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:40:05.948599  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:40:05.948655  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:40:05.975477  528764 cri.go:89] found id: ""
	I1217 20:40:05.975492  528764 logs.go:282] 0 containers: []
	W1217 20:40:05.975499  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:40:05.975507  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:40:05.975518  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:40:06.041819  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:40:06.041840  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:40:06.056861  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:40:06.056877  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:40:06.121776  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:40:06.113002   17431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:06.113953   17431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:06.115560   17431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:06.115989   17431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:06.117538   17431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:40:06.113002   17431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:06.113953   17431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:06.115560   17431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:06.115989   17431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:06.117538   17431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:40:06.121787  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:40:06.121799  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:40:06.189149  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:40:06.189168  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:40:08.726723  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:40:08.736543  528764 kubeadm.go:602] duration metric: took 4m2.922502769s to restartPrimaryControlPlane
	W1217 20:40:08.736595  528764 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1217 20:40:08.736673  528764 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1217 20:40:09.144455  528764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 20:40:09.157270  528764 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 20:40:09.165045  528764 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 20:40:09.165097  528764 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 20:40:09.172944  528764 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 20:40:09.172955  528764 kubeadm.go:158] found existing configuration files:
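With the restart path abandoned, minikube falls back to a full reset-and-reinit. The missing .conf files are expected here: the 'kubeadm reset --force' a few lines up removes the static-pod manifests and the kubeconfig files under /etc/kubernetes, so the "stale config cleanup" that follows is a no-op; each grep fails with "No such file or directory" and the rm -f calls succeed trivially. Re-running the check by hand (assuming shell access to the node) is just the log's own command:

    # after 'kubeadm reset --force' these are expected to be absent
    sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf \
         /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf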
	
	I1217 20:40:09.173008  528764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1217 20:40:09.180768  528764 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 20:40:09.180823  528764 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 20:40:09.188593  528764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1217 20:40:09.196627  528764 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 20:40:09.196696  528764 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 20:40:09.204027  528764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1217 20:40:09.211590  528764 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 20:40:09.211645  528764 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 20:40:09.219300  528764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1217 20:40:09.227194  528764 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 20:40:09.227262  528764 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 20:40:09.234747  528764 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 20:40:09.272070  528764 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1217 20:40:09.272212  528764 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 20:40:09.341132  528764 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 20:40:09.341223  528764 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1217 20:40:09.341264  528764 kubeadm.go:319] OS: Linux
	I1217 20:40:09.341317  528764 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 20:40:09.341383  528764 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 20:40:09.341441  528764 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 20:40:09.341494  528764 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 20:40:09.341544  528764 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 20:40:09.341595  528764 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 20:40:09.341642  528764 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 20:40:09.341697  528764 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 20:40:09.341746  528764 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 20:40:09.410099  528764 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 20:40:09.410202  528764 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 20:40:09.410291  528764 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 20:40:09.420776  528764 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 20:40:09.424281  528764 out.go:252]   - Generating certificates and keys ...
	I1217 20:40:09.424384  528764 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 20:40:09.424470  528764 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 20:40:09.424574  528764 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1217 20:40:09.424647  528764 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1217 20:40:09.424730  528764 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1217 20:40:09.424800  528764 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1217 20:40:09.424875  528764 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1217 20:40:09.424947  528764 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1217 20:40:09.425042  528764 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1217 20:40:09.425124  528764 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1217 20:40:09.425164  528764 kubeadm.go:319] [certs] Using the existing "sa" key
	I1217 20:40:09.425224  528764 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 20:40:09.510914  528764 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 20:40:09.769116  528764 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 20:40:10.300117  528764 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 20:40:10.525653  528764 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 20:40:10.613609  528764 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 20:40:10.614221  528764 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 20:40:10.616799  528764 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 20:40:10.619993  528764 out.go:252]   - Booting up control plane ...
	I1217 20:40:10.620096  528764 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 20:40:10.620217  528764 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 20:40:10.620290  528764 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 20:40:10.635322  528764 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 20:40:10.635439  528764 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 20:40:10.644820  528764 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 20:40:10.645930  528764 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 20:40:10.645984  528764 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 20:40:10.779996  528764 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 20:40:10.780110  528764 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 20:44:10.781176  528764 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001248714s
	I1217 20:44:10.781203  528764 kubeadm.go:319] 
	I1217 20:44:10.781260  528764 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 20:44:10.781303  528764 kubeadm.go:319] 	- The kubelet is not running
	I1217 20:44:10.781406  528764 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 20:44:10.781411  528764 kubeadm.go:319] 
	I1217 20:44:10.781555  528764 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 20:44:10.781602  528764 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 20:44:10.781633  528764 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 20:44:10.781637  528764 kubeadm.go:319] 
	I1217 20:44:10.786300  528764 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1217 20:44:10.786712  528764 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 20:44:10.786818  528764 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 20:44:10.787052  528764 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1217 20:44:10.787056  528764 kubeadm.go:319] 
	I1217 20:44:10.787124  528764 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
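Of the three preflight warnings, the first (failure to modprobe the "configs" module) is common on cloud kernels that do not ship that module and is usually harmless, and the third only notes that the kubelet unit is not enabled at boot, which minikube manages itself. The middle warning is the interesting one: it reports that this host is still on cgroup v1 and that kubelet v1.35 and newer refuses cgroup v1 unless the kubelet configuration option 'FailCgroupV1' is explicitly set to 'false'.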
	W1217 20:44:10.787237  528764 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001248714s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
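The attempt dies at the kubelet health check, and the retry that follows fails identically: the manifests are written, the kubelet is started, and the health endpoint never answers within the 4m0s budget. Given the cgroup v1 warning above, a plausible root cause (not provable from this log alone) is the kubelet exiting immediately on startup rather than a slow control plane. First-line triage on the node is exactly what kubeadm suggests, plus the endpoint it polls; a sketch assuming shell access to the node:

    # is the kubelet running, and if not, why did it stop?
    systemctl status kubelet
    journalctl -xeu kubelet | tail -n 100
    # the endpoint kubeadm polls for up to 4m0s
    curl -sS http://127.0.0.1:10248/healthz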
	
	I1217 20:44:10.787339  528764 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1217 20:44:11.201167  528764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 20:44:11.214381  528764 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 20:44:11.214439  528764 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 20:44:11.222598  528764 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 20:44:11.222610  528764 kubeadm.go:158] found existing configuration files:
	
	I1217 20:44:11.222661  528764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1217 20:44:11.230419  528764 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 20:44:11.230478  528764 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 20:44:11.238159  528764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1217 20:44:11.246406  528764 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 20:44:11.246462  528764 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 20:44:11.254307  528764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1217 20:44:11.262104  528764 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 20:44:11.262159  528764 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 20:44:11.270202  528764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1217 20:44:11.278439  528764 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 20:44:11.278497  528764 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 20:44:11.286143  528764 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 20:44:11.330597  528764 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1217 20:44:11.330648  528764 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 20:44:11.407432  528764 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 20:44:11.407494  528764 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1217 20:44:11.407526  528764 kubeadm.go:319] OS: Linux
	I1217 20:44:11.407568  528764 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 20:44:11.407631  528764 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 20:44:11.407675  528764 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 20:44:11.407720  528764 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 20:44:11.407764  528764 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 20:44:11.407809  528764 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 20:44:11.407851  528764 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 20:44:11.407896  528764 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 20:44:11.407938  528764 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 20:44:11.479750  528764 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 20:44:11.479854  528764 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 20:44:11.479945  528764 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 20:44:11.492072  528764 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 20:44:11.494989  528764 out.go:252]   - Generating certificates and keys ...
	I1217 20:44:11.495078  528764 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 20:44:11.495152  528764 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 20:44:11.495231  528764 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1217 20:44:11.495312  528764 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1217 20:44:11.495394  528764 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1217 20:44:11.495452  528764 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1217 20:44:11.495526  528764 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1217 20:44:11.495616  528764 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1217 20:44:11.495700  528764 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1217 20:44:11.495778  528764 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1217 20:44:11.495818  528764 kubeadm.go:319] [certs] Using the existing "sa" key
	I1217 20:44:11.495877  528764 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 20:44:11.718879  528764 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 20:44:11.913718  528764 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 20:44:12.104953  528764 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 20:44:12.214740  528764 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 20:44:13.078100  528764 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 20:44:13.078681  528764 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 20:44:13.081470  528764 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 20:44:13.086841  528764 out.go:252]   - Booting up control plane ...
	I1217 20:44:13.086964  528764 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 20:44:13.087047  528764 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 20:44:13.087115  528764 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 20:44:13.101223  528764 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 20:44:13.101325  528764 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 20:44:13.108618  528764 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 20:44:13.108874  528764 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 20:44:13.109039  528764 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 20:44:13.243147  528764 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 20:44:13.243267  528764 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 20:48:13.243345  528764 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000238438s
	I1217 20:48:13.243376  528764 kubeadm.go:319] 
	I1217 20:48:13.243430  528764 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 20:48:13.243460  528764 kubeadm.go:319] 	- The kubelet is not running
	I1217 20:48:13.243558  528764 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 20:48:13.243562  528764 kubeadm.go:319] 
	I1217 20:48:13.243678  528764 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 20:48:13.243708  528764 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 20:48:13.243736  528764 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 20:48:13.243739  528764 kubeadm.go:319] 
	I1217 20:48:13.247539  528764 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1217 20:48:13.247985  528764 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 20:48:13.248095  528764 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 20:48:13.248338  528764 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1217 20:48:13.248343  528764 kubeadm.go:319] 
	I1217 20:48:13.248416  528764 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1217 20:48:13.248469  528764 kubeadm.go:403] duration metric: took 12m7.468824114s to StartCluster
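The 12m07s StartCluster total decomposes from the waits logged above: about 4m03s spent trying to restart the existing control plane, then roughly 4m02s per kubeadm init attempt (a 4m00s kubelet health wait plus a couple of seconds of reset, preflight, and certificate reuse), twice over: 4:03 + 4:02 + 4:02 ≈ 12:07.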
	I1217 20:48:13.248499  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:48:13.248560  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:48:13.273652  528764 cri.go:89] found id: ""
	I1217 20:48:13.273665  528764 logs.go:282] 0 containers: []
	W1217 20:48:13.273672  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:48:13.273677  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:48:13.273743  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:48:13.299758  528764 cri.go:89] found id: ""
	I1217 20:48:13.299773  528764 logs.go:282] 0 containers: []
	W1217 20:48:13.299780  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:48:13.299787  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:48:13.299849  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:48:13.331514  528764 cri.go:89] found id: ""
	I1217 20:48:13.331527  528764 logs.go:282] 0 containers: []
	W1217 20:48:13.331534  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:48:13.331538  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:48:13.331632  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:48:13.361494  528764 cri.go:89] found id: ""
	I1217 20:48:13.361508  528764 logs.go:282] 0 containers: []
	W1217 20:48:13.361515  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:48:13.361520  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:48:13.361583  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:48:13.392361  528764 cri.go:89] found id: ""
	I1217 20:48:13.392374  528764 logs.go:282] 0 containers: []
	W1217 20:48:13.392382  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:48:13.392387  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:48:13.392445  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:48:13.420567  528764 cri.go:89] found id: ""
	I1217 20:48:13.420581  528764 logs.go:282] 0 containers: []
	W1217 20:48:13.420589  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:48:13.420594  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:48:13.420652  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:48:13.446072  528764 cri.go:89] found id: ""
	I1217 20:48:13.446086  528764 logs.go:282] 0 containers: []
	W1217 20:48:13.446093  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:48:13.446102  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:48:13.446112  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:48:13.512293  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:48:13.512314  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:48:13.527934  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:48:13.527951  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:48:13.596728  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:48:13.587277   21200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:48:13.587931   21200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:48:13.589795   21200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:48:13.590404   21200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:48:13.592261   21200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:48:13.587277   21200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:48:13.587931   21200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:48:13.589795   21200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:48:13.590404   21200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:48:13.592261   21200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:48:13.596751  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:48:13.596762  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:48:13.666834  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:48:13.666852  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
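The container-status gather uses a small fallback idiom: the command substitution resolves crictl's full path if it is installed (falling back to the bare name), and only if that whole crictl invocation fails does it run docker ps instead. Rewritten with $( ) for readability:

    # prefer crictl, fall back to docker if crictl fails
    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a

On this CRI-O node the crictl branch is taken and, given the empty per-component probes above, presumably lists no control-plane containers.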
	W1217 20:48:13.697763  528764 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000238438s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1217 20:48:13.697796  528764 out.go:285] * 
	W1217 20:48:13.697859  528764 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W1217 20:48:13.697876  528764 out.go:285] * 
	W1217 20:48:13.700016  528764 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 20:48:13.704929  528764 out.go:203] 
	W1217 20:48:13.708733  528764 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000238438s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1217 20:48:13.708785  528764 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1217 20:48:13.708804  528764 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1217 20:48:13.713576  528764 out.go:203] 
	
	
	==> CRI-O <==
	Dec 17 20:36:04 functional-655452 crio[10065]: time="2025-12-17T20:36:04.496553819Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 17 20:36:04 functional-655452 crio[10065]: time="2025-12-17T20:36:04.496588913Z" level=info msg="Starting seccomp notifier watcher"
	Dec 17 20:36:04 functional-655452 crio[10065]: time="2025-12-17T20:36:04.496641484Z" level=info msg="Create NRI interface"
	Dec 17 20:36:04 functional-655452 crio[10065]: time="2025-12-17T20:36:04.496756307Z" level=info msg="built-in NRI default validator is disabled"
	Dec 17 20:36:04 functional-655452 crio[10065]: time="2025-12-17T20:36:04.496765161Z" level=info msg="runtime interface created"
	Dec 17 20:36:04 functional-655452 crio[10065]: time="2025-12-17T20:36:04.496787586Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 17 20:36:04 functional-655452 crio[10065]: time="2025-12-17T20:36:04.496795537Z" level=info msg="runtime interface starting up..."
	Dec 17 20:36:04 functional-655452 crio[10065]: time="2025-12-17T20:36:04.496804792Z" level=info msg="starting plugins..."
	Dec 17 20:36:04 functional-655452 crio[10065]: time="2025-12-17T20:36:04.496818503Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 17 20:36:04 functional-655452 crio[10065]: time="2025-12-17T20:36:04.496896764Z" level=info msg="No systemd watchdog enabled"
	Dec 17 20:36:04 functional-655452 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 17 20:40:09 functional-655452 crio[10065]: time="2025-12-17T20:40:09.415834383Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-rc.1" id=58f6f0f1-488b-4240-a679-3e157f00d7e7 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:40:09 functional-655452 crio[10065]: time="2025-12-17T20:40:09.416590837Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" id=05b425cc-49a9-416d-8e00-62945047df36 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:40:09 functional-655452 crio[10065]: time="2025-12-17T20:40:09.417323538Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-rc.1" id=a9a38e6d-b290-413f-a93f-cf194783972f name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:40:09 functional-655452 crio[10065]: time="2025-12-17T20:40:09.417962945Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-rc.1" id=bdf79a37-e5ac-441d-baa9-990efb2af86f name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:40:09 functional-655452 crio[10065]: time="2025-12-17T20:40:09.418404377Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=f29ade00-2b87-48af-a8d1-af1f70d12fc1 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:40:09 functional-655452 crio[10065]: time="2025-12-17T20:40:09.418943992Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=aa01ccac-5dc1-42c2-9b96-b5307aedf908 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:40:09 functional-655452 crio[10065]: time="2025-12-17T20:40:09.419435131Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.6-0" id=3071c5cb-d2e8-40e4-bf26-10cfdb83c6ca name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:44:11 functional-655452 crio[10065]: time="2025-12-17T20:44:11.483168755Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-rc.1" id=116885b2-e96e-48a5-8c7d-749c0bd3c872 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:44:11 functional-655452 crio[10065]: time="2025-12-17T20:44:11.484179432Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" id=a7b99d88-fbbf-4485-ad77-1f09bb11e283 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:44:11 functional-655452 crio[10065]: time="2025-12-17T20:44:11.484714555Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-rc.1" id=1a3a48a9-47e1-4681-9a10-70d7c5e85de2 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:44:11 functional-655452 crio[10065]: time="2025-12-17T20:44:11.48529777Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-rc.1" id=48ecbe50-05dc-4736-8a4c-23a7b8f0b752 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:44:11 functional-655452 crio[10065]: time="2025-12-17T20:44:11.485817657Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=13bf3d26-ab2e-4773-bb7e-3fc288ba3714 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:44:11 functional-655452 crio[10065]: time="2025-12-17T20:44:11.486350122Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=3ebf0c9f-0c46-4d67-8924-03dd39ad4399 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:44:11 functional-655452 crio[10065]: time="2025-12-17T20:44:11.486847969Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.6-0" id=e3deb8c8-e04b-4949-9c80-5a8e5a9b5bee name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:50:13.023980   23326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:50:13.024616   23326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:50:13.026379   23326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:50:13.026866   23326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:50:13.028343   23326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec17 19:02] hrtimer: interrupt took 71042327 ns
	[Dec17 20:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec17 20:12] overlayfs: idmapped layers are currently not supported
	[  +0.080288] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec17 20:17] overlayfs: idmapped layers are currently not supported
	[Dec17 20:18] overlayfs: idmapped layers are currently not supported
	[Dec17 20:35] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 20:50:13 up  3:32,  0 user,  load average: 0.96, 0.38, 0.55
	Linux functional-655452 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 20:50:10 functional-655452 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 20:50:11 functional-655452 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1118.
	Dec 17 20:50:11 functional-655452 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 20:50:11 functional-655452 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 20:50:11 functional-655452 kubelet[23175]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 20:50:11 functional-655452 kubelet[23175]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 20:50:11 functional-655452 kubelet[23175]: E1217 20:50:11.118894   23175 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 20:50:11 functional-655452 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 20:50:11 functional-655452 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 20:50:11 functional-655452 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1119.
	Dec 17 20:50:11 functional-655452 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 20:50:11 functional-655452 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 20:50:11 functional-655452 kubelet[23221]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 20:50:11 functional-655452 kubelet[23221]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 20:50:11 functional-655452 kubelet[23221]: E1217 20:50:11.822225   23221 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 20:50:11 functional-655452 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 20:50:11 functional-655452 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 20:50:12 functional-655452 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1120.
	Dec 17 20:50:12 functional-655452 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 20:50:12 functional-655452 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 20:50:12 functional-655452 kubelet[23244]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 20:50:12 functional-655452 kubelet[23244]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 20:50:12 functional-655452 kubelet[23244]: E1217 20:50:12.642677   23244 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 20:50:12 functional-655452 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 20:50:12 functional-655452 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
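
The "==> kubelet <==" section above shows the unit in a tight restart loop (restart counter already past 1100), dying on the same configuration-validation error each time. For anyone reproducing this locally, a minimal sketch of how to confirm the loop from the host, assuming the profile name from this report and minikube's ssh command passthrough:

	# Hedged sketch: inspect the kubelet unit on the node, per the advice in the
	# kubeadm output above ('systemctl status kubelet' / 'journalctl -xeu kubelet').
	minikube -p functional-655452 ssh -- sudo systemctl status kubelet --no-pager
	minikube -p functional-655452 ssh -- sudo journalctl -u kubelet --no-pager -n 20
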
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-655452 -n functional-655452
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-655452 -n functional-655452: exit status 2 (346.370284ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-655452" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd (3.01s)
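The root cause running through this failure is visible in the kubelet log above: on this cgroup v1 host (kernel 5.15.0-1084-aws), kubelet v1.35.0-rc.1 fails configuration validation unless cgroup v1 support is explicitly re-enabled. A minimal sketch of the opt-in named by the SystemVerification warning, assuming the camelCase KubeletConfiguration spelling of 'FailCgroupV1'; note that minikube rewrites /var/lib/kubelet/config.yaml on each start, so this illustrates the setting rather than a durable fix (migrating the host to cgroup v2 is the path the linked KEP recommends):

	# Hedged sketch: append the opt-in field to the kubelet config file that the
	# kubeadm output above shows being written. Field name assumed from the
	# warning text ('FailCgroupV1' -> failCgroupV1).
	echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml
	sudo systemctl restart kubelet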

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect (2.34s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-655452 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1636: (dbg) Non-zero exit: kubectl --context functional-655452 create deployment hello-node-connect --image kicbase/echo-server: exit status 1 (51.646792ms)

                                                
                                                
** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

                                                
                                                
** /stderr **
functional_test.go:1638: failed to create hello-node deployment with this command "kubectl --context functional-655452 create deployment hello-node-connect --image kicbase/echo-server": exit status 1.
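
Every kubectl call from here on fails identically because nothing is listening on the apiserver endpoint, consistent with the stopped apiserver in the StatusCmd failure above. A quick reachability probe, sketched against the endpoint from the error message (a 'connection refused' here, as opposed to a TLS or HTTP error, means no listener at all):

	# Hedged sketch: distinguish "no listener" from "unhealthy apiserver".
	curl -ksS https://192.168.49.2:8441/healthz || true
	kubectl --context functional-655452 get --raw /healthz
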
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-655452 describe po hello-node-connect
functional_test.go:1612: (dbg) Non-zero exit: kubectl --context functional-655452 describe po hello-node-connect: exit status 1 (58.823488ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:1614: "kubectl --context functional-655452 describe po hello-node-connect" failed: exit status 1
functional_test.go:1616: hello-node pod describe:
functional_test.go:1618: (dbg) Run:  kubectl --context functional-655452 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-655452 logs -l app=hello-node-connect: exit status 1 (58.498339ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-655452 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-655452 describe svc hello-node-connect
functional_test.go:1624: (dbg) Non-zero exit: kubectl --context functional-655452 describe svc hello-node-connect: exit status 1 (59.80038ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:1626: "kubectl --context functional-655452 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1628: hello-node svc describe:
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-655452
helpers_test.go:244: (dbg) docker inspect functional-655452:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ce7a6a54d14f425b4187bd0ca1b803527a4413b0e2019a0bca6ff0c4cc720b0f",
	        "Created": "2025-12-17T20:21:23.127111799Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 517339,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T20:21:23.205943598Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/ce7a6a54d14f425b4187bd0ca1b803527a4413b0e2019a0bca6ff0c4cc720b0f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ce7a6a54d14f425b4187bd0ca1b803527a4413b0e2019a0bca6ff0c4cc720b0f/hostname",
	        "HostsPath": "/var/lib/docker/containers/ce7a6a54d14f425b4187bd0ca1b803527a4413b0e2019a0bca6ff0c4cc720b0f/hosts",
	        "LogPath": "/var/lib/docker/containers/ce7a6a54d14f425b4187bd0ca1b803527a4413b0e2019a0bca6ff0c4cc720b0f/ce7a6a54d14f425b4187bd0ca1b803527a4413b0e2019a0bca6ff0c4cc720b0f-json.log",
	        "Name": "/functional-655452",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-655452:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-655452",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ce7a6a54d14f425b4187bd0ca1b803527a4413b0e2019a0bca6ff0c4cc720b0f",
	                "LowerDir": "/var/lib/docker/overlay2/b078ea641dafe3bb75ca3d83c43fd851f0f209e5f3a5a8b9ceab888e394031a8-init/diff:/var/lib/docker/overlay2/c4759b344ecb109c83d66e4ef56c76903f1d1e597efb86ab2d2c7911b1130a8f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b078ea641dafe3bb75ca3d83c43fd851f0f209e5f3a5a8b9ceab888e394031a8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b078ea641dafe3bb75ca3d83c43fd851f0f209e5f3a5a8b9ceab888e394031a8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b078ea641dafe3bb75ca3d83c43fd851f0f209e5f3a5a8b9ceab888e394031a8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-655452",
	                "Source": "/var/lib/docker/volumes/functional-655452/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-655452",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-655452",
	                "name.minikube.sigs.k8s.io": "functional-655452",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "77ae7dbcf69b3201be4ce65a88d235dbad078142318e01a5939425f9766fb924",
	            "SandboxKey": "/var/run/docker/netns/77ae7dbcf69b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33178"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33179"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33182"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33180"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33181"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-655452": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "36:c6:98:b5:58:5a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b68cd6e69ccdb81b0870dd976e2d5401ca696480842d2c469293121d59435cfb",
	                    "EndpointID": "82926bb4b08b5f66131c1026a8bc72a582e9cb8c6d97a278a494965923f47cb0",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-655452",
	                        "ce7a6a54d14f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
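
The inspect output confirms the container itself is fine ("Status": "running") and that the apiserver port 8441/tcp is published on a loopback host port, so the refused connections come from inside the node, not from Docker networking. Pulling that mapping out directly, a sketch using docker's standard Go-template format flag and the container name from this report:

	# Hedged sketch: recover the published host port for the apiserver.
	docker port functional-655452 8441/tcp
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-655452
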
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-655452 -n functional-655452
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-655452 -n functional-655452: exit status 2 (291.158971ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                             ARGS                                                                             │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-655452 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                      │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:35 UTC │                     │
	│ cache   │ functional-655452 cache reload                                                                                                                               │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:35 UTC │ 17 Dec 25 20:35 UTC │
	│ ssh     │ functional-655452 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                      │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:35 UTC │ 17 Dec 25 20:35 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                             │ minikube          │ jenkins │ v1.37.0 │ 17 Dec 25 20:35 UTC │ 17 Dec 25 20:35 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                          │ minikube          │ jenkins │ v1.37.0 │ 17 Dec 25 20:35 UTC │ 17 Dec 25 20:35 UTC │
	│ kubectl │ functional-655452 kubectl -- --context functional-655452 get pods                                                                                            │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:35 UTC │                     │
	│ start   │ -p functional-655452 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                                     │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:36 UTC │                     │
	│ config  │ functional-655452 config unset cpus                                                                                                                          │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:48 UTC │ 17 Dec 25 20:48 UTC │
	│ config  │ functional-655452 config get cpus                                                                                                                            │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:48 UTC │                     │
	│ config  │ functional-655452 config set cpus 2                                                                                                                          │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:48 UTC │ 17 Dec 25 20:48 UTC │
	│ config  │ functional-655452 config get cpus                                                                                                                            │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:48 UTC │ 17 Dec 25 20:48 UTC │
	│ config  │ functional-655452 config unset cpus                                                                                                                          │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:48 UTC │ 17 Dec 25 20:48 UTC │
	│ ssh     │ functional-655452 ssh -n functional-655452 sudo cat /home/docker/cp-test.txt                                                                                 │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:48 UTC │ 17 Dec 25 20:48 UTC │
	│ config  │ functional-655452 config get cpus                                                                                                                            │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:48 UTC │                     │
	│ ssh     │ functional-655452 ssh echo hello                                                                                                                             │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:48 UTC │ 17 Dec 25 20:48 UTC │
	│ cp      │ functional-655452 cp functional-655452:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelCpCm2932922172/001/cp-test.txt │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:48 UTC │ 17 Dec 25 20:48 UTC │
	│ ssh     │ functional-655452 ssh cat /etc/hostname                                                                                                                      │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:48 UTC │ 17 Dec 25 20:48 UTC │
	│ ssh     │ functional-655452 ssh -n functional-655452 sudo cat /home/docker/cp-test.txt                                                                                 │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:48 UTC │ 17 Dec 25 20:48 UTC │
	│ tunnel  │ functional-655452 tunnel --alsologtostderr                                                                                                                   │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:48 UTC │                     │
	│ tunnel  │ functional-655452 tunnel --alsologtostderr                                                                                                                   │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:48 UTC │                     │
	│ cp      │ functional-655452 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                                                    │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:48 UTC │ 17 Dec 25 20:48 UTC │
	│ tunnel  │ functional-655452 tunnel --alsologtostderr                                                                                                                   │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:48 UTC │                     │
	│ ssh     │ functional-655452 ssh -n functional-655452 sudo cat /tmp/does/not/exist/cp-test.txt                                                                          │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:48 UTC │ 17 Dec 25 20:48 UTC │
	│ addons  │ functional-655452 addons list                                                                                                                                │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:49 UTC │ 17 Dec 25 20:49 UTC │
	│ addons  │ functional-655452 addons list -o json                                                                                                                        │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:49 UTC │ 17 Dec 25 20:49 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 20:36:01
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 20:36:01.304180  528764 out.go:360] Setting OutFile to fd 1 ...
	I1217 20:36:01.304299  528764 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:36:01.304303  528764 out.go:374] Setting ErrFile to fd 2...
	I1217 20:36:01.304307  528764 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:36:01.304548  528764 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-485134/.minikube/bin
	I1217 20:36:01.304941  528764 out.go:368] Setting JSON to false
	I1217 20:36:01.305793  528764 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":11911,"bootTime":1765991851,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1217 20:36:01.305860  528764 start.go:143] virtualization:  
	I1217 20:36:01.309940  528764 out.go:179] * [functional-655452] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 20:36:01.313178  528764 out.go:179]   - MINIKUBE_LOCATION=21808
	I1217 20:36:01.313261  528764 notify.go:221] Checking for updates...
	I1217 20:36:01.319276  528764 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 20:36:01.322533  528764 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-485134/kubeconfig
	I1217 20:36:01.325481  528764 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-485134/.minikube
	I1217 20:36:01.328332  528764 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 20:36:01.331257  528764 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 20:36:01.334638  528764 config.go:182] Loaded profile config "functional-655452": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 20:36:01.334735  528764 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 20:36:01.377324  528764 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 20:36:01.377436  528764 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 20:36:01.442821  528764 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-17 20:36:01.432767342 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 20:36:01.442911  528764 docker.go:319] overlay module found
	I1217 20:36:01.446093  528764 out.go:179] * Using the docker driver based on existing profile
	I1217 20:36:01.448835  528764 start.go:309] selected driver: docker
	I1217 20:36:01.448847  528764 start.go:927] validating driver "docker" against &{Name:functional-655452 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-655452 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:36:01.448948  528764 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 20:36:01.449055  528764 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 20:36:01.502893  528764 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-17 20:36:01.493096577 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 20:36:01.503296  528764 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 20:36:01.503325  528764 cni.go:84] Creating CNI manager for ""
	I1217 20:36:01.503373  528764 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 20:36:01.503423  528764 start.go:353] cluster config:
	{Name:functional-655452 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-655452 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:36:01.506646  528764 out.go:179] * Starting "functional-655452" primary control-plane node in "functional-655452" cluster
	I1217 20:36:01.509580  528764 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 20:36:01.512594  528764 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 20:36:01.515481  528764 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1217 20:36:01.515521  528764 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21808-485134/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4
	I1217 20:36:01.515533  528764 cache.go:65] Caching tarball of preloaded images
	I1217 20:36:01.515555  528764 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 20:36:01.515635  528764 preload.go:238] Found /home/jenkins/minikube-integration/21808-485134/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1217 20:36:01.515645  528764 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on crio
	I1217 20:36:01.515757  528764 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/config.json ...
	I1217 20:36:01.536964  528764 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 20:36:01.536994  528764 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 20:36:01.537012  528764 cache.go:243] Successfully downloaded all kic artifacts
	I1217 20:36:01.537046  528764 start.go:360] acquireMachinesLock for functional-655452: {Name:mk8b480106227149e1b79da3c4f580b9dc5e0f96 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 20:36:01.537100  528764 start.go:364] duration metric: took 37.99µs to acquireMachinesLock for "functional-655452"
	I1217 20:36:01.537118  528764 start.go:96] Skipping create...Using existing machine configuration
	I1217 20:36:01.537122  528764 fix.go:54] fixHost starting: 
	I1217 20:36:01.537383  528764 cli_runner.go:164] Run: docker container inspect functional-655452 --format={{.State.Status}}
	I1217 20:36:01.554557  528764 fix.go:112] recreateIfNeeded on functional-655452: state=Running err=<nil>
	W1217 20:36:01.554578  528764 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 20:36:01.557934  528764 out.go:252] * Updating the running docker "functional-655452" container ...
	I1217 20:36:01.557966  528764 machine.go:94] provisionDockerMachine start ...
	I1217 20:36:01.558073  528764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
	I1217 20:36:01.576191  528764 main.go:143] libmachine: Using SSH client type: native
	I1217 20:36:01.576509  528764 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1217 20:36:01.576515  528764 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 20:36:01.707478  528764 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-655452
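Editor's note: the container-inspect Run line above is how minikube discovers which host port Docker published for the container's SSH daemon; the Go template indexes NetworkSettings.Ports["22/tcp"][0].HostPort. The same query stands alone (container name taken from this run):

    # Resolve the host port mapped to the container's 22/tcp endpoint.
    docker container inspect \
      -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
      functional-655452

On this run it resolved to 33178, which is why the SSH client above dials 127.0.0.1:33178.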
	
	I1217 20:36:01.707493  528764 ubuntu.go:182] provisioning hostname "functional-655452"
	I1217 20:36:01.707564  528764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
	I1217 20:36:01.725762  528764 main.go:143] libmachine: Using SSH client type: native
	I1217 20:36:01.726063  528764 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1217 20:36:01.726071  528764 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-655452 && echo "functional-655452" | sudo tee /etc/hostname
	I1217 20:36:01.865176  528764 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-655452
	
	I1217 20:36:01.865255  528764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
	I1217 20:36:01.884852  528764 main.go:143] libmachine: Using SSH client type: native
	I1217 20:36:01.885159  528764 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1217 20:36:01.885174  528764 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-655452' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-655452/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-655452' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 20:36:02.016339  528764 main.go:143] libmachine: SSH cmd err, output: <nil>: 
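Editor's note: the empty command output above means the hostname entry already existed. The script is idempotent because grep -xq matches whole lines; a simplified re-statement (GNU grep; hostname from this run; the full script above additionally rewrites an existing 127.0.1.1 line via sed instead of appending):

    # Append a hosts entry only if no line already names this host.
    HOST=functional-655452
    if ! grep -xq ".*\s$HOST" /etc/hosts; then
      echo "127.0.1.1 $HOST" | sudo tee -a /etc/hosts
    fi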
	I1217 20:36:02.016355  528764 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21808-485134/.minikube CaCertPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21808-485134/.minikube}
	I1217 20:36:02.016378  528764 ubuntu.go:190] setting up certificates
	I1217 20:36:02.016388  528764 provision.go:84] configureAuth start
	I1217 20:36:02.016451  528764 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-655452
	I1217 20:36:02.035106  528764 provision.go:143] copyHostCerts
	I1217 20:36:02.035175  528764 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem, removing ...
	I1217 20:36:02.035183  528764 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem
	I1217 20:36:02.035257  528764 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem (1082 bytes)
	I1217 20:36:02.035375  528764 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem, removing ...
	I1217 20:36:02.035379  528764 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem
	I1217 20:36:02.035406  528764 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem (1123 bytes)
	I1217 20:36:02.035470  528764 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem, removing ...
	I1217 20:36:02.035473  528764 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem
	I1217 20:36:02.035496  528764 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem (1675 bytes)
	I1217 20:36:02.035545  528764 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem org=jenkins.functional-655452 san=[127.0.0.1 192.168.49.2 functional-655452 localhost minikube]
	I1217 20:36:02.115164  528764 provision.go:177] copyRemoteCerts
	I1217 20:36:02.115221  528764 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 20:36:02.115260  528764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
	I1217 20:36:02.139076  528764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/functional-655452/id_rsa Username:docker}
	I1217 20:36:02.235601  528764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 20:36:02.254294  528764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 20:36:02.272604  528764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 20:36:02.290727  528764 provision.go:87] duration metric: took 274.326255ms to configureAuth
	I1217 20:36:02.290752  528764 ubuntu.go:206] setting minikube options for container-runtime
	I1217 20:36:02.291001  528764 config.go:182] Loaded profile config "functional-655452": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 20:36:02.291105  528764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
	I1217 20:36:02.309578  528764 main.go:143] libmachine: Using SSH client type: native
	I1217 20:36:02.309891  528764 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1217 20:36:02.309902  528764 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 20:36:02.644802  528764 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 20:36:02.644817  528764 machine.go:97] duration metric: took 1.086843683s to provisionDockerMachine
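Editor's note: the tee above drops an environment file at /etc/sysconfig/crio.minikube so CRI-O treats the service CIDR (10.96.0.0/12) as an insecure registry. That the crio unit sources this file via EnvironmentFile= is an assumption about the kicbase image, not something shown in the log, but it can be checked on the node:

    # Confirm the drop-in exists and see how the crio unit consumes it
    # (EnvironmentFile wiring is assumed, not shown in the log).
    cat /etc/sysconfig/crio.minikube
    systemctl cat crio | grep -n 'EnvironmentFile\|CRIO_MINIKUBE_OPTIONS'
    systemctl is-active crio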
	I1217 20:36:02.644827  528764 start.go:293] postStartSetup for "functional-655452" (driver="docker")
	I1217 20:36:02.644838  528764 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 20:36:02.644899  528764 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 20:36:02.644944  528764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
	I1217 20:36:02.663334  528764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/functional-655452/id_rsa Username:docker}
	I1217 20:36:02.759464  528764 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 20:36:02.762934  528764 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 20:36:02.762952  528764 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 20:36:02.762970  528764 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-485134/.minikube/addons for local assets ...
	I1217 20:36:02.763029  528764 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-485134/.minikube/files for local assets ...
	I1217 20:36:02.763103  528764 filesync.go:149] local asset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem -> 4884122.pem in /etc/ssl/certs
	I1217 20:36:02.763175  528764 filesync.go:149] local asset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/test/nested/copy/488412/hosts -> hosts in /etc/test/nested/copy/488412
	I1217 20:36:02.763216  528764 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/488412
	I1217 20:36:02.770652  528764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem --> /etc/ssl/certs/4884122.pem (1708 bytes)
	I1217 20:36:02.788458  528764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/test/nested/copy/488412/hosts --> /etc/test/nested/copy/488412/hosts (40 bytes)
	I1217 20:36:02.805971  528764 start.go:296] duration metric: took 161.129975ms for postStartSetup
	I1217 20:36:02.806055  528764 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 20:36:02.806105  528764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
	I1217 20:36:02.832327  528764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/functional-655452/id_rsa Username:docker}
	I1217 20:36:02.932517  528764 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 20:36:02.937022  528764 fix.go:56] duration metric: took 1.399892436s for fixHost
	I1217 20:36:02.937037  528764 start.go:83] releasing machines lock for "functional-655452", held for 1.399929845s
	I1217 20:36:02.937101  528764 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-655452
	I1217 20:36:02.954767  528764 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem (1338 bytes)
	W1217 20:36:02.954820  528764 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412_empty.pem, impossibly tiny 0 bytes
	I1217 20:36:02.954828  528764 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 20:36:02.954855  528764 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem (1082 bytes)
	I1217 20:36:02.954880  528764 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem (1123 bytes)
	I1217 20:36:02.954903  528764 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem (1675 bytes)
	I1217 20:36:02.954966  528764 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem (1708 bytes)
	I1217 20:36:02.955032  528764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem --> /usr/share/ca-certificates/488412.pem (1338 bytes)
	I1217 20:36:02.955078  528764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
	I1217 20:36:02.972629  528764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/functional-655452/id_rsa Username:docker}
	I1217 20:36:03.082963  528764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem --> /usr/share/ca-certificates/4884122.pem (1708 bytes)
	I1217 20:36:03.101544  528764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 20:36:03.119807  528764 ssh_runner.go:195] Run: openssl version
	I1217 20:36:03.126345  528764 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/488412.pem
	I1217 20:36:03.134006  528764 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/488412.pem /etc/ssl/certs/488412.pem
	I1217 20:36:03.141755  528764 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/488412.pem
	I1217 20:36:03.145627  528764 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 20:21 /usr/share/ca-certificates/488412.pem
	I1217 20:36:03.145694  528764 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/488412.pem
	I1217 20:36:03.186918  528764 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 20:36:03.196074  528764 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4884122.pem
	I1217 20:36:03.205007  528764 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4884122.pem /etc/ssl/certs/4884122.pem
	I1217 20:36:03.212820  528764 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4884122.pem
	I1217 20:36:03.216798  528764 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 20:21 /usr/share/ca-certificates/4884122.pem
	I1217 20:36:03.216865  528764 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4884122.pem
	I1217 20:36:03.260241  528764 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 20:36:03.268200  528764 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:36:03.275663  528764 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 20:36:03.283259  528764 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:36:03.287077  528764 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:36:03.287187  528764 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:36:03.328526  528764 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 20:36:03.336152  528764 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-certificates >/dev/null 2>&1 && sudo update-ca-certificates || true"
	I1217 20:36:03.339768  528764 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-trust >/dev/null 2>&1 && sudo update-ca-trust extract || true"
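Editor's note: each `openssl x509 -hash` run above computes the subject-name hash OpenSSL uses to look up CAs in /etc/ssl/certs, which is why the follow-up probes test symlinks like 51391683.0 and b5213941.0. A minimal reproduction of the link step (cert path from the log):

    # Compute the subject hash and create the <hash>.0 symlink OpenSSL expects.
    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")
    sudo ln -fs "$CERT" "/etc/ssl/certs/$HASH.0"
    sudo test -L "/etc/ssl/certs/$HASH.0" && echo "linked as $HASH.0"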
	I1217 20:36:03.343092  528764 ssh_runner.go:195] Run: cat /version.json
	I1217 20:36:03.343166  528764 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 20:36:03.444762  528764 ssh_runner.go:195] Run: systemctl --version
	I1217 20:36:03.450992  528764 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 20:36:03.489251  528764 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 20:36:03.493525  528764 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 20:36:03.493594  528764 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 20:36:03.501380  528764 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
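Editor's note: the find above renames any bridge or podman CNI configs to *.mk_disabled so that kindnet is the only active network plugin; here nothing matched. A dry-run form of the same search, with the shell escaping spelled out:

    # List bridge/podman CNI configs that would be disabled (no rename).
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -o -name '*podman*' \) -a ! -name '*.mk_disabled' \) \
      -print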
	I1217 20:36:03.501400  528764 start.go:496] detecting cgroup driver to use...
	I1217 20:36:03.501430  528764 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 20:36:03.501474  528764 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 20:36:03.519927  528764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 20:36:03.535865  528764 docker.go:218] disabling cri-docker service (if available) ...
	I1217 20:36:03.535924  528764 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 20:36:03.553665  528764 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 20:36:03.568077  528764 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 20:36:03.688788  528764 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 20:36:03.816391  528764 docker.go:234] disabling docker service ...
	I1217 20:36:03.816445  528764 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 20:36:03.832743  528764 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 20:36:03.846562  528764 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 20:36:03.965969  528764 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 20:36:04.109607  528764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 20:36:04.122680  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 20:36:04.137683  528764 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 20:36:04.137752  528764 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:36:04.147364  528764 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1217 20:36:04.147423  528764 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:36:04.157452  528764 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:36:04.166810  528764 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:36:04.176014  528764 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 20:36:04.184171  528764 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:36:04.192938  528764 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:36:04.201542  528764 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:36:04.210110  528764 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 20:36:04.217743  528764 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 20:36:04.225321  528764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:36:04.332263  528764 ssh_runner.go:195] Run: sudo systemctl restart crio
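Editor's note: the sed pipeline above edits /etc/crio/crio.conf.d/02-crio.conf in place: it pins the pause image to registry.k8s.io/pause:3.10.1, forces cgroup_manager = "cgroupfs", sets conmon_cgroup = "pod" (the value CRI-O requires with the cgroupfs manager), and injects net.ipv4.ip_unprivileged_port_start=0 into default_sysctls. After the restart, the rewritten keys can be spot-checked with:

    # Spot-check the keys the sed pipeline rewrote (path from the log).
    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf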
	I1217 20:36:04.503245  528764 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 20:36:04.503305  528764 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 20:36:04.508393  528764 start.go:564] Will wait 60s for crictl version
	I1217 20:36:04.508461  528764 ssh_runner.go:195] Run: which crictl
	I1217 20:36:04.512401  528764 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 20:36:04.541968  528764 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 20:36:04.542059  528764 ssh_runner.go:195] Run: crio --version
	I1217 20:36:04.568941  528764 ssh_runner.go:195] Run: crio --version
	I1217 20:36:04.602248  528764 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	I1217 20:36:04.604894  528764 cli_runner.go:164] Run: docker network inspect functional-655452 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 20:36:04.620832  528764 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1217 20:36:04.627460  528764 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1217 20:36:04.630066  528764 kubeadm.go:884] updating cluster {Name:functional-655452 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-655452 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 20:36:04.630187  528764 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1217 20:36:04.630246  528764 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 20:36:04.668067  528764 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 20:36:04.668079  528764 crio.go:433] Images already preloaded, skipping extraction
	I1217 20:36:04.668136  528764 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 20:36:04.698017  528764 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 20:36:04.698030  528764 cache_images.go:86] Images are preloaded, skipping loading
	I1217 20:36:04.698036  528764 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 crio true true} ...
	I1217 20:36:04.698140  528764 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-655452 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-655452 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
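Editor's note: the unit text above is a systemd drop-in. The bare `ExecStart=` line first clears the inherited command list (systemd's convention for list-type settings), and the second `ExecStart=` replaces it with minikube's version-pinned kubelet invocation, node IP fixed to 192.168.49.2. Once installed (the scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf appears further down), the merged result can be inspected with:

    # Show the effective kubelet unit, drop-in included, and the final ExecStart.
    systemctl cat kubelet
    systemctl show -p ExecStart kubelet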
	I1217 20:36:04.698216  528764 ssh_runner.go:195] Run: crio config
	I1217 20:36:04.769162  528764 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1217 20:36:04.769193  528764 cni.go:84] Creating CNI manager for ""
	I1217 20:36:04.769200  528764 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 20:36:04.769208  528764 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 20:36:04.769233  528764 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-655452 NodeName:functional-655452 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 20:36:04.769373  528764 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-655452"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
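Editor's note: the dump above is the four-document kubeadm config minikube renders (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). Before feeding it to the init phases below, it can be sanity-checked with the same pinned binary; that `kubeadm config validate` is available in this kubeadm release is an assumption, and the yaml.new path is taken from the scp line further down:

    # Validate the rendered multi-document config with the pinned kubeadm
    # ('kubeadm config validate' is assumed present in this release).
    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm config validate \
        --config /var/tmp/minikube/kubeadm.yaml.new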
	
	I1217 20:36:04.769444  528764 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1217 20:36:04.777167  528764 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 20:36:04.777239  528764 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 20:36:04.784566  528764 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1217 20:36:04.797984  528764 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1217 20:36:04.810563  528764 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2069 bytes)
	I1217 20:36:04.823513  528764 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1217 20:36:04.827291  528764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:36:04.950251  528764 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 20:36:05.072220  528764 certs.go:69] Setting up /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452 for IP: 192.168.49.2
	I1217 20:36:05.072231  528764 certs.go:195] generating shared ca certs ...
	I1217 20:36:05.072245  528764 certs.go:227] acquiring lock for ca certs: {Name:mk2b1bd9fa0b029b02dd38a7b1b08717d4426eca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:36:05.072401  528764 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.key
	I1217 20:36:05.072442  528764 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.key
	I1217 20:36:05.072448  528764 certs.go:257] generating profile certs ...
	I1217 20:36:05.072540  528764 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/client.key
	I1217 20:36:05.072591  528764 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/apiserver.key.aa95dda5
	I1217 20:36:05.072629  528764 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/proxy-client.key
	I1217 20:36:05.072739  528764 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem (1338 bytes)
	W1217 20:36:05.072768  528764 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412_empty.pem, impossibly tiny 0 bytes
	I1217 20:36:05.072780  528764 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 20:36:05.072805  528764 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem (1082 bytes)
	I1217 20:36:05.072827  528764 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem (1123 bytes)
	I1217 20:36:05.072848  528764 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem (1675 bytes)
	I1217 20:36:05.072891  528764 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem (1708 bytes)
	I1217 20:36:05.073535  528764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 20:36:05.100676  528764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 20:36:05.124485  528764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 20:36:05.145313  528764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1217 20:36:05.166267  528764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 20:36:05.185043  528764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1217 20:36:05.202568  528764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 20:36:05.220530  528764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 20:36:05.238845  528764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem --> /usr/share/ca-certificates/4884122.pem (1708 bytes)
	I1217 20:36:05.257230  528764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 20:36:05.275490  528764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem --> /usr/share/ca-certificates/488412.pem (1338 bytes)
	I1217 20:36:05.293936  528764 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 20:36:05.307062  528764 ssh_runner.go:195] Run: openssl version
	I1217 20:36:05.314048  528764 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/488412.pem
	I1217 20:36:05.321882  528764 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/488412.pem /etc/ssl/certs/488412.pem
	I1217 20:36:05.329752  528764 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/488412.pem
	I1217 20:36:05.333743  528764 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 20:21 /usr/share/ca-certificates/488412.pem
	I1217 20:36:05.333820  528764 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/488412.pem
	I1217 20:36:05.375575  528764 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 20:36:05.383326  528764 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4884122.pem
	I1217 20:36:05.390831  528764 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4884122.pem /etc/ssl/certs/4884122.pem
	I1217 20:36:05.398670  528764 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4884122.pem
	I1217 20:36:05.402451  528764 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 20:21 /usr/share/ca-certificates/4884122.pem
	I1217 20:36:05.402506  528764 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4884122.pem
	I1217 20:36:05.445761  528764 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 20:36:05.453165  528764 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:36:05.460611  528764 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 20:36:05.468452  528764 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:36:05.472228  528764 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:36:05.472283  528764 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:36:05.513950  528764 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 20:36:05.521563  528764 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 20:36:05.525764  528764 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 20:36:05.567120  528764 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 20:36:05.608840  528764 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 20:36:05.649788  528764 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 20:36:05.692741  528764 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 20:36:05.738724  528764 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
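Editor's note: each `-checkend 86400` probe above asks OpenSSL a yes/no question: exit 0 if the certificate is still valid 86400 seconds (24 h) from now, exit 1 if it will have expired by then; that exit status is how minikube decides whether a cert needs regenerating. The same check in isolation (cert path from the log):

    # Exit status: 0 = valid for at least 24h, 1 = expires within 24h.
    if openssl x509 -noout -checkend 86400 \
          -in /var/lib/minikube/certs/apiserver-kubelet-client.crt; then
      echo "cert good for at least another day"
    else
      echo "cert expiring soon; it would be regenerated"
    fi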
	I1217 20:36:05.779654  528764 kubeadm.go:401] StartCluster: {Name:functional-655452 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-655452 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:36:05.779744  528764 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 20:36:05.779806  528764 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 20:36:05.806396  528764 cri.go:89] found id: ""
	I1217 20:36:05.806453  528764 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 20:36:05.814019  528764 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 20:36:05.814027  528764 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 20:36:05.814076  528764 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 20:36:05.823754  528764 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 20:36:05.824259  528764 kubeconfig.go:125] found "functional-655452" server: "https://192.168.49.2:8441"
	I1217 20:36:05.825529  528764 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 20:36:05.834629  528764 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-17 20:21:29.177912325 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-17 20:36:04.817890668 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
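Editor's note: drift detection is just `diff -u` between the kubeadm.yaml written at the previous start and the freshly rendered kubeadm.yaml.new; any difference (here, the enable-admission-plugins change shown in the diff) makes minikube stop the kube-system containers and replay the init phases. The check stands alone as:

    # diff exits 0 when identical, 1 when the files differ (= drift).
    if sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
      echo "no drift; reuse running control plane"
    else
      echo "drift detected; reconfigure cluster"
    fi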
	I1217 20:36:05.834639  528764 kubeadm.go:1161] stopping kube-system containers ...
	I1217 20:36:05.834650  528764 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1217 20:36:05.834705  528764 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 20:36:05.867919  528764 cri.go:89] found id: ""
	I1217 20:36:05.867989  528764 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1217 20:36:05.885438  528764 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 20:36:05.893366  528764 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Dec 17 20:25 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec 17 20:25 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec 17 20:25 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec 17 20:25 /etc/kubernetes/scheduler.conf
	
	I1217 20:36:05.893420  528764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1217 20:36:05.901137  528764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1217 20:36:05.909490  528764 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1217 20:36:05.909550  528764 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 20:36:05.916910  528764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1217 20:36:05.924811  528764 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1217 20:36:05.924869  528764 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 20:36:05.932331  528764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1217 20:36:05.940039  528764 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1217 20:36:05.940108  528764 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
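Editor's note: each kubeconfig under /etc/kubernetes is grepped above for the expected endpoint https://control-plane.minikube.internal:8441; admin.conf matches, while kubelet.conf, controller-manager.conf, and scheduler.conf do not, so they are deleted and regenerated by the kubeconfig phase below. The pattern, condensed:

    # Keep a kubeconfig only if it targets the expected control-plane endpoint.
    EP='https://control-plane.minikube.internal:8441'
    for f in /etc/kubernetes/kubelet.conf \
             /etc/kubernetes/controller-manager.conf \
             /etc/kubernetes/scheduler.conf; do
      sudo grep -q "$EP" "$f" || sudo rm -f "$f"
    done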
	I1217 20:36:05.947225  528764 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 20:36:05.955062  528764 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 20:36:06.001485  528764 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 20:36:07.569758  528764 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.568246795s)
	I1217 20:36:07.569817  528764 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1217 20:36:07.780039  528764 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 20:36:07.827231  528764 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
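Editor's note: rather than a monolithic `kubeadm init`, minikube replays individual phases against the existing node in the order shown above: certs, kubeconfig, kubelet-start, control-plane, then etcd. Condensed into one loop (binary and config paths from the log; a sketch, not the exact internal invocation):

    # Re-run the same init phases with the pinned kubeadm binary.
    K=/var/lib/minikube/binaries/v1.35.0-rc.1
    for phase in 'certs all' 'kubeconfig all' 'kubelet-start' \
                 'control-plane all' 'etcd local'; do
      sudo env PATH="$K:$PATH" kubeadm init phase $phase \
          --config /var/tmp/minikube/kubeadm.yaml
    done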
	I1217 20:36:07.887398  528764 api_server.go:52] waiting for apiserver process to appear ...
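Editor's note: the wall of pgrep lines that follows is a fixed-interval poll; roughly every 500 ms minikube re-runs the probe until a matching process exists (each failed probe exits non-zero and is retried). A minimal equivalent loop:

    # Poll every 0.5s until a kube-apiserver process is up
    # (-f matches the full command line, -x exactly, -n = newest match).
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      sleep 0.5
    done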
	I1217 20:36:07.887476  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:08.388398  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:08.888310  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:09.388248  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:09.887698  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:10.387671  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:10.887697  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:11.387734  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:11.888366  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:12.388180  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:12.888379  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:13.387943  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:13.887667  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:14.388477  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:14.888341  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:15.388247  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:15.888425  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:16.388580  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:16.888356  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:17.387968  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:17.888549  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:18.388370  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:18.887715  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:19.387565  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:19.887775  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:36:20.388470  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... the same pgrep probe repeated every ~500ms; 93 further identical attempts through 20:37:06 omitted ...]
	I1217 20:37:07.388433  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
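The block of pgrep probes above is minikube waiting for a kube-apiserver process to reappear: the same check is retried on a fixed ~500ms interval until it succeeds or a deadline expires. The Go sketch below illustrates only that wait pattern; runSSH, the host name, and the timeout value are assumptions standing in for minikube's ssh_runner plumbing, not the actual implementation.

// pollAPIServer is a minimal sketch of the wait loop visible in the log:
// run `pgrep -xnf kube-apiserver.*minikube.*` on the node every 500ms until
// it succeeds or the deadline passes. runSSH is a hypothetical stand-in for
// minikube's ssh_runner; error handling is reduced to "did pgrep match".
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// runSSH executes a command on the node; here we shell out to the local
// `ssh` binary for illustration (assumption: key-based auth to node works).
func runSSH(node, cmd string) error {
	return exec.Command("ssh", node, cmd).Run()
}

func pollAPIServer(node string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 only if a matching process exists.
		if err := runSSH(node, "sudo pgrep -xnf kube-apiserver.*minikube.*"); err == nil {
			return nil
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
}

func main() {
	if err := pollAPIServer("minikube", 500*time.Millisecond, 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}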
	I1217 20:37:07.887764  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:37:07.887843  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:37:07.914157  528764 cri.go:89] found id: ""
	I1217 20:37:07.914172  528764 logs.go:282] 0 containers: []
	W1217 20:37:07.914179  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:37:07.914184  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:37:07.914241  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:37:07.939801  528764 cri.go:89] found id: ""
	I1217 20:37:07.939815  528764 logs.go:282] 0 containers: []
	W1217 20:37:07.939823  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:37:07.939828  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:37:07.939892  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:37:07.966197  528764 cri.go:89] found id: ""
	I1217 20:37:07.966213  528764 logs.go:282] 0 containers: []
	W1217 20:37:07.966221  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:37:07.966226  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:37:07.966284  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:37:07.997124  528764 cri.go:89] found id: ""
	I1217 20:37:07.997138  528764 logs.go:282] 0 containers: []
	W1217 20:37:07.997145  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:37:07.997150  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:37:07.997211  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:37:08.028280  528764 cri.go:89] found id: ""
	I1217 20:37:08.028295  528764 logs.go:282] 0 containers: []
	W1217 20:37:08.028302  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:37:08.028308  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:37:08.028368  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:37:08.058094  528764 cri.go:89] found id: ""
	I1217 20:37:08.058109  528764 logs.go:282] 0 containers: []
	W1217 20:37:08.058116  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:37:08.058121  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:37:08.058185  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:37:08.085720  528764 cri.go:89] found id: ""
	I1217 20:37:08.085736  528764 logs.go:282] 0 containers: []
	W1217 20:37:08.085744  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:37:08.085752  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:37:08.085763  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:37:08.150624  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:37:08.142553   11126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:08.142992   11126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:08.144649   11126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:08.145220   11126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:08.146782   11126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	[... stderr identical to the five "connection refused" errors quoted above ...]
	
	** /stderr **
	I1217 20:37:08.150636  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:37:08.150647  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:37:08.217929  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:37:08.217949  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:37:08.250550  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:37:08.250567  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:37:08.318542  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:37:08.318562  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
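Once the process probe has failed for long enough, each diagnostic pass inventories CRI containers component by component (crictl ps -a --quiet --name=<component>); an empty ID list for every control-plane name is what yields the repeated "0 containers" / "No container was found" warnings above. A rough illustrative sweep in the same spirit, assuming crictl is installed and can reach the CRI-O socket directly (the log's sudo/ssh wrapping is omitted):

// listComponents sketches the container sweep seen in the log: for each
// control-plane component, ask crictl for matching container IDs and warn
// when none are found. This is illustrative, not minikube's cri.go code.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
	}
	for _, name := range components {
		out, err := exec.Command("crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		ids := strings.Fields(string(out)) // one container ID per line when present
		if err != nil || len(ids) == 0 {
			fmt.Printf("no container found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %d container(s)\n", name, len(ids))
	}
}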
	I1217 20:37:10.835004  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:10.846829  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:37:10.846892  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:37:10.877739  528764 cri.go:89] found id: ""
	I1217 20:37:10.877756  528764 logs.go:282] 0 containers: []
	W1217 20:37:10.877762  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:37:10.877768  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:37:10.877829  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:37:10.903713  528764 cri.go:89] found id: ""
	I1217 20:37:10.903727  528764 logs.go:282] 0 containers: []
	W1217 20:37:10.903735  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:37:10.903740  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:37:10.903802  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:37:10.931733  528764 cri.go:89] found id: ""
	I1217 20:37:10.931747  528764 logs.go:282] 0 containers: []
	W1217 20:37:10.931754  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:37:10.931759  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:37:10.931818  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:37:10.957707  528764 cri.go:89] found id: ""
	I1217 20:37:10.957722  528764 logs.go:282] 0 containers: []
	W1217 20:37:10.957729  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:37:10.957735  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:37:10.957793  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:37:10.986438  528764 cri.go:89] found id: ""
	I1217 20:37:10.986452  528764 logs.go:282] 0 containers: []
	W1217 20:37:10.986459  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:37:10.986464  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:37:10.986530  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:37:11.014361  528764 cri.go:89] found id: ""
	I1217 20:37:11.014385  528764 logs.go:282] 0 containers: []
	W1217 20:37:11.014393  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:37:11.014402  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:37:11.014462  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:37:11.041366  528764 cri.go:89] found id: ""
	I1217 20:37:11.041381  528764 logs.go:282] 0 containers: []
	W1217 20:37:11.041388  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:37:11.041401  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:37:11.041411  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:37:11.056502  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:37:11.056519  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:37:11.122467  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:37:11.114176   11237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:11.114734   11237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:11.116369   11237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:11.116862   11237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:11.118392   11237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	[... stderr identical to the five "connection refused" errors quoted above ...]
	
	** /stderr **
	I1217 20:37:11.122477  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:37:11.122486  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:37:11.190244  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:37:11.190265  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:37:11.220700  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:37:11.220717  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
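Each pass also snapshots host-side logs: the last 400 journal lines for crio and for kubelet, warning-and-above kernel messages, and a container status listing. A condensed sketch of that gather step; the unit names and line counts come from the commands in the log, the dmesg flags are simplified, and output is merely counted here rather than stored:

// gatherLogs mirrors the collection commands visible in the log: tail the
// systemd journals for crio and kubelet and grab recent kernel warnings.
// Illustrative only; the real code runs these over SSH with sudo.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmds := [][]string{
		{"journalctl", "-u", "crio", "-n", "400"},
		{"journalctl", "-u", "kubelet", "-n", "400"},
		{"sh", "-c", "dmesg --level warn,err,crit,alert,emerg | tail -n 400"},
	}
	for _, c := range cmds {
		out, err := exec.Command(c[0], c[1:]...).CombinedOutput()
		if err != nil {
			fmt.Printf("%v failed: %v\n", c, err)
			continue
		}
		fmt.Printf("%v: collected %d bytes\n", c, len(out))
	}
}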
	I1217 20:37:13.792757  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:13.802840  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:37:13.802899  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:37:13.836386  528764 cri.go:89] found id: ""
	I1217 20:37:13.836401  528764 logs.go:282] 0 containers: []
	W1217 20:37:13.836408  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:37:13.836415  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:37:13.836471  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:37:13.870570  528764 cri.go:89] found id: ""
	I1217 20:37:13.870585  528764 logs.go:282] 0 containers: []
	W1217 20:37:13.870592  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:37:13.870597  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:37:13.870656  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:37:13.898823  528764 cri.go:89] found id: ""
	I1217 20:37:13.898837  528764 logs.go:282] 0 containers: []
	W1217 20:37:13.898845  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:37:13.898850  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:37:13.898908  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:37:13.926200  528764 cri.go:89] found id: ""
	I1217 20:37:13.926214  528764 logs.go:282] 0 containers: []
	W1217 20:37:13.926221  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:37:13.926226  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:37:13.926284  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:37:13.952625  528764 cri.go:89] found id: ""
	I1217 20:37:13.952639  528764 logs.go:282] 0 containers: []
	W1217 20:37:13.952647  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:37:13.952652  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:37:13.952711  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:37:13.978517  528764 cri.go:89] found id: ""
	I1217 20:37:13.978531  528764 logs.go:282] 0 containers: []
	W1217 20:37:13.978539  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:37:13.978544  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:37:13.978602  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:37:14.010201  528764 cri.go:89] found id: ""
	I1217 20:37:14.010215  528764 logs.go:282] 0 containers: []
	W1217 20:37:14.010223  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:37:14.010231  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:37:14.010242  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:37:14.075917  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:37:14.075936  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:37:14.091123  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:37:14.091142  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:37:14.155624  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:37:14.146305   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:14.147214   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:14.149124   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:14.149628   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:14.151335   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	[... stderr identical to the five "connection refused" errors quoted above ...]
	
	** /stderr **
	I1217 20:37:14.155636  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:37:14.155647  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:37:14.224215  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:37:14.224237  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:37:16.756286  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:16.766692  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:37:16.766752  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:37:16.795671  528764 cri.go:89] found id: ""
	I1217 20:37:16.795692  528764 logs.go:282] 0 containers: []
	W1217 20:37:16.795700  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:37:16.795705  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:37:16.795762  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:37:16.829850  528764 cri.go:89] found id: ""
	I1217 20:37:16.829863  528764 logs.go:282] 0 containers: []
	W1217 20:37:16.829870  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:37:16.829875  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:37:16.829932  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:37:16.860495  528764 cri.go:89] found id: ""
	I1217 20:37:16.860509  528764 logs.go:282] 0 containers: []
	W1217 20:37:16.860516  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:37:16.860521  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:37:16.860580  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:37:16.888120  528764 cri.go:89] found id: ""
	I1217 20:37:16.888133  528764 logs.go:282] 0 containers: []
	W1217 20:37:16.888141  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:37:16.888146  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:37:16.888201  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:37:16.918449  528764 cri.go:89] found id: ""
	I1217 20:37:16.918463  528764 logs.go:282] 0 containers: []
	W1217 20:37:16.918469  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:37:16.918484  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:37:16.918542  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:37:16.948626  528764 cri.go:89] found id: ""
	I1217 20:37:16.948652  528764 logs.go:282] 0 containers: []
	W1217 20:37:16.948659  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:37:16.948665  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:37:16.948729  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:37:16.977608  528764 cri.go:89] found id: ""
	I1217 20:37:16.977622  528764 logs.go:282] 0 containers: []
	W1217 20:37:16.977630  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:37:16.977637  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:37:16.977647  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:37:17.042493  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:37:17.042513  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:37:17.057131  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:37:17.057148  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:37:17.125378  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:37:17.116772   11447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:17.117492   11447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:17.119295   11447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:17.119699   11447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:17.121218   11447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	[... stderr identical to the five "connection refused" errors quoted above ...]
	
	** /stderr **
	I1217 20:37:17.125389  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:37:17.125400  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:37:17.192802  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:37:17.192822  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:37:19.720869  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:19.730761  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:37:19.730822  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:37:19.757595  528764 cri.go:89] found id: ""
	I1217 20:37:19.757609  528764 logs.go:282] 0 containers: []
	W1217 20:37:19.757617  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:37:19.757622  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:37:19.757679  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:37:19.783074  528764 cri.go:89] found id: ""
	I1217 20:37:19.783087  528764 logs.go:282] 0 containers: []
	W1217 20:37:19.783102  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:37:19.783108  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:37:19.783165  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:37:19.810405  528764 cri.go:89] found id: ""
	I1217 20:37:19.810419  528764 logs.go:282] 0 containers: []
	W1217 20:37:19.810426  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:37:19.810432  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:37:19.810493  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:37:19.837744  528764 cri.go:89] found id: ""
	I1217 20:37:19.837758  528764 logs.go:282] 0 containers: []
	W1217 20:37:19.837766  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:37:19.837771  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:37:19.837828  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:37:19.873857  528764 cri.go:89] found id: ""
	I1217 20:37:19.873872  528764 logs.go:282] 0 containers: []
	W1217 20:37:19.873879  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:37:19.873884  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:37:19.873952  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:37:19.902376  528764 cri.go:89] found id: ""
	I1217 20:37:19.902390  528764 logs.go:282] 0 containers: []
	W1217 20:37:19.902397  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:37:19.902402  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:37:19.902477  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:37:19.928530  528764 cri.go:89] found id: ""
	I1217 20:37:19.928544  528764 logs.go:282] 0 containers: []
	W1217 20:37:19.928552  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:37:19.928559  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:37:19.928570  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:37:19.993175  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:37:19.984569   11545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:19.985337   11545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:19.986955   11545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:19.987601   11545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:19.989181   11545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	[... stderr identical to the five "connection refused" errors quoted above ...]
	
	** /stderr **
	I1217 20:37:19.993185  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:37:19.993196  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:37:20.066305  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:37:20.066326  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:37:20.099789  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:37:20.099806  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:37:20.165283  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:37:20.165304  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:37:22.681290  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:22.691134  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:37:22.691202  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:37:22.723831  528764 cri.go:89] found id: ""
	I1217 20:37:22.723845  528764 logs.go:282] 0 containers: []
	W1217 20:37:22.723862  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:37:22.723868  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:37:22.723933  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:37:22.749315  528764 cri.go:89] found id: ""
	I1217 20:37:22.749329  528764 logs.go:282] 0 containers: []
	W1217 20:37:22.749336  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:37:22.749341  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:37:22.749396  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:37:22.773712  528764 cri.go:89] found id: ""
	I1217 20:37:22.773738  528764 logs.go:282] 0 containers: []
	W1217 20:37:22.773746  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:37:22.773751  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:37:22.773825  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:37:22.799128  528764 cri.go:89] found id: ""
	I1217 20:37:22.799147  528764 logs.go:282] 0 containers: []
	W1217 20:37:22.799154  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:37:22.799159  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:37:22.799214  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:37:22.830333  528764 cri.go:89] found id: ""
	I1217 20:37:22.830347  528764 logs.go:282] 0 containers: []
	W1217 20:37:22.830354  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:37:22.830359  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:37:22.830414  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:37:22.857658  528764 cri.go:89] found id: ""
	I1217 20:37:22.857671  528764 logs.go:282] 0 containers: []
	W1217 20:37:22.857678  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:37:22.857683  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:37:22.857740  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:37:22.892187  528764 cri.go:89] found id: ""
	I1217 20:37:22.892202  528764 logs.go:282] 0 containers: []
	W1217 20:37:22.892209  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:37:22.892217  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:37:22.892226  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:37:22.963552  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:37:22.963572  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:37:22.992259  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:37:22.992274  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:37:23.058615  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:37:23.058636  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:37:23.073409  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:37:23.073442  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:37:23.138641  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:37:23.129846   11670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:23.130677   11670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:23.132360   11670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:23.132912   11670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:23.134580   11670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	[... stderr identical to the five "connection refused" errors quoted above ...]
	
	** /stderr **
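Every "describe nodes" attempt dies the same way: kubectl cannot even open a TCP connection to localhost:8441, so the failure sits at the socket layer (nothing listening on the apiserver port) rather than in TLS, auth, or the API itself. A minimal reachability probe that draws that distinction, with the address taken from the errors above:

// dialCheck verifies the first thing the "connection refused" errors above
// establish: nothing is accepting TCP connections on the apiserver port.
// A refused dial means no listener; a successful dial would shift suspicion
// above the socket layer (TLS, auth, or the API itself).
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver port unreachable:", err) // matches the log's failure mode
		return
	}
	conn.Close()
	fmt.Println("port 8441 is listening; look above the socket layer")
}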
	I1217 20:37:25.638919  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:25.648946  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:37:25.649032  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:37:25.678111  528764 cri.go:89] found id: ""
	I1217 20:37:25.678127  528764 logs.go:282] 0 containers: []
	W1217 20:37:25.678134  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:37:25.678140  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:37:25.678230  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:37:25.704834  528764 cri.go:89] found id: ""
	I1217 20:37:25.704848  528764 logs.go:282] 0 containers: []
	W1217 20:37:25.704855  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:37:25.704861  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:37:25.704943  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:37:25.731274  528764 cri.go:89] found id: ""
	I1217 20:37:25.731287  528764 logs.go:282] 0 containers: []
	W1217 20:37:25.731295  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:37:25.731300  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:37:25.731354  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:37:25.756601  528764 cri.go:89] found id: ""
	I1217 20:37:25.756615  528764 logs.go:282] 0 containers: []
	W1217 20:37:25.756622  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:37:25.756628  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:37:25.756689  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:37:25.781743  528764 cri.go:89] found id: ""
	I1217 20:37:25.781757  528764 logs.go:282] 0 containers: []
	W1217 20:37:25.781764  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:37:25.781787  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:37:25.781846  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:37:25.810686  528764 cri.go:89] found id: ""
	I1217 20:37:25.810699  528764 logs.go:282] 0 containers: []
	W1217 20:37:25.810718  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:37:25.810724  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:37:25.810791  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:37:25.861184  528764 cri.go:89] found id: ""
	I1217 20:37:25.861200  528764 logs.go:282] 0 containers: []
	W1217 20:37:25.861207  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:37:25.861215  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:37:25.861237  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:37:25.937980  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:37:25.938000  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:37:25.953961  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:37:25.953980  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:37:26.020362  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:37:26.011417   11762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:26.012115   11762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:26.013886   11762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:26.014423   11762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:26.016131   11762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	[... stderr identical to the five "connection refused" errors quoted above ...]
	
	** /stderr **
	I1217 20:37:26.020376  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:37:26.020387  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:37:26.092647  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:37:26.092669  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:37:28.622440  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:28.632675  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:37:28.632735  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:37:28.657198  528764 cri.go:89] found id: ""
	I1217 20:37:28.657213  528764 logs.go:282] 0 containers: []
	W1217 20:37:28.657220  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:37:28.657226  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:37:28.657284  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:37:28.683432  528764 cri.go:89] found id: ""
	I1217 20:37:28.683446  528764 logs.go:282] 0 containers: []
	W1217 20:37:28.683453  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:37:28.683458  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:37:28.683513  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:37:28.708948  528764 cri.go:89] found id: ""
	I1217 20:37:28.708962  528764 logs.go:282] 0 containers: []
	W1217 20:37:28.708969  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:37:28.708975  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:37:28.709030  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:37:28.738615  528764 cri.go:89] found id: ""
	I1217 20:37:28.738629  528764 logs.go:282] 0 containers: []
	W1217 20:37:28.738637  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:37:28.738642  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:37:28.738697  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:37:28.764458  528764 cri.go:89] found id: ""
	I1217 20:37:28.764472  528764 logs.go:282] 0 containers: []
	W1217 20:37:28.764479  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:37:28.764484  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:37:28.764544  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:37:28.789220  528764 cri.go:89] found id: ""
	I1217 20:37:28.789234  528764 logs.go:282] 0 containers: []
	W1217 20:37:28.789242  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:37:28.789247  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:37:28.789302  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:37:28.813820  528764 cri.go:89] found id: ""
	I1217 20:37:28.813835  528764 logs.go:282] 0 containers: []
	W1217 20:37:28.813841  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:37:28.813848  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:37:28.813869  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:37:28.896349  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:37:28.887880   11857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:28.888593   11857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:28.890336   11857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:28.890868   11857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:28.892434   11857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:37:28.887880   11857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:28.888593   11857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:28.890336   11857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:28.890868   11857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:28.892434   11857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:37:28.896359  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:37:28.896369  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:37:28.964976  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:37:28.964996  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:37:28.995089  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:37:28.995105  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:37:29.073565  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:37:29.073593  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
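	[editor's note] The "container status" gather a few lines above uses a shell fallback chain rather than assuming a fixed tool path. A minimal annotated sketch of the same idiom (the command string is copied verbatim from the log; only the comments are added):

	    # Resolve crictl via PATH; if `which` finds nothing it prints nothing
	    # and exits non-zero, so the backtick substitution falls back to the
	    # literal name "crictl". If that invocation fails as well, try the
	    # Docker CLI as a last resort.
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a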
	I1217 20:37:31.589038  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:31.599070  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:37:31.599131  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:37:31.624604  528764 cri.go:89] found id: ""
	I1217 20:37:31.624619  528764 logs.go:282] 0 containers: []
	W1217 20:37:31.624626  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:37:31.624631  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:37:31.624688  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:37:31.650593  528764 cri.go:89] found id: ""
	I1217 20:37:31.650608  528764 logs.go:282] 0 containers: []
	W1217 20:37:31.650616  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:37:31.650621  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:37:31.650684  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:37:31.679069  528764 cri.go:89] found id: ""
	I1217 20:37:31.679084  528764 logs.go:282] 0 containers: []
	W1217 20:37:31.679091  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:37:31.679096  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:37:31.679153  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:37:31.709079  528764 cri.go:89] found id: ""
	I1217 20:37:31.709093  528764 logs.go:282] 0 containers: []
	W1217 20:37:31.709100  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:37:31.709105  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:37:31.709162  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:37:31.740223  528764 cri.go:89] found id: ""
	I1217 20:37:31.740237  528764 logs.go:282] 0 containers: []
	W1217 20:37:31.740244  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:37:31.740252  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:37:31.740307  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:37:31.771855  528764 cri.go:89] found id: ""
	I1217 20:37:31.771869  528764 logs.go:282] 0 containers: []
	W1217 20:37:31.771877  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:37:31.771883  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:37:31.771942  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:37:31.798992  528764 cri.go:89] found id: ""
	I1217 20:37:31.799006  528764 logs.go:282] 0 containers: []
	W1217 20:37:31.799013  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:37:31.799021  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:37:31.799031  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:37:31.876265  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:37:31.876285  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:37:31.912678  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:37:31.912694  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:37:31.979473  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:37:31.979494  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:37:31.994138  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:37:31.994154  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:37:32.058919  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:37:32.050836   11991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:32.051662   11991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:32.053342   11991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:32.053690   11991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:32.055195   11991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:37:32.050836   11991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:32.051662   11991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:32.053342   11991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:32.053690   11991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:32.055195   11991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
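	[editor's note] The harness repeats the same probe cycle every ~3 seconds: look for a running kube-apiserver process, list each control-plane container with crictl, then gather kubelet/dmesg/CRI-O/container-status logs and retry "describe nodes", which keeps failing because nothing is serving on localhost:8441. A minimal sketch of the same checks, assuming SSH access to the minikube node (the command strings are taken verbatim from the log above):

	    # Is an apiserver process running at all? (non-zero exit when absent)
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'

	    # Any kube-apiserver containers known to CRI-O, running or exited?
	    sudo crictl ps -a --quiet --name=kube-apiserver

	    # Recent CRI-O and kubelet logs, as gathered by the harness
	    sudo journalctl -u crio -n 400
	    sudo journalctl -u kubelet -n 400

	    # The describe-nodes call that fails with "connection refused" on :8441
	    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes \
	      --kubeconfig=/var/lib/minikube/kubeconfig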
	I1217 20:37:34.560573  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:34.570410  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:37:34.570477  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:37:34.595394  528764 cri.go:89] found id: ""
	I1217 20:37:34.595407  528764 logs.go:282] 0 containers: []
	W1217 20:37:34.595415  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:37:34.595420  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:37:34.595474  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:37:34.620347  528764 cri.go:89] found id: ""
	I1217 20:37:34.620362  528764 logs.go:282] 0 containers: []
	W1217 20:37:34.620376  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:37:34.620382  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:37:34.620444  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:37:34.646173  528764 cri.go:89] found id: ""
	I1217 20:37:34.646188  528764 logs.go:282] 0 containers: []
	W1217 20:37:34.646195  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:37:34.646200  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:37:34.646259  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:37:34.675076  528764 cri.go:89] found id: ""
	I1217 20:37:34.675090  528764 logs.go:282] 0 containers: []
	W1217 20:37:34.675098  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:37:34.675103  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:37:34.675160  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:37:34.700382  528764 cri.go:89] found id: ""
	I1217 20:37:34.700396  528764 logs.go:282] 0 containers: []
	W1217 20:37:34.700403  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:37:34.700414  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:37:34.700479  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:37:34.727372  528764 cri.go:89] found id: ""
	I1217 20:37:34.727387  528764 logs.go:282] 0 containers: []
	W1217 20:37:34.727394  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:37:34.727400  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:37:34.727456  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:37:34.753290  528764 cri.go:89] found id: ""
	I1217 20:37:34.753305  528764 logs.go:282] 0 containers: []
	W1217 20:37:34.753312  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:37:34.753319  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:37:34.753331  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:37:34.782001  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:37:34.782019  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:37:34.847492  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:37:34.847511  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:37:34.863498  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:37:34.863515  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:37:34.939936  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:37:34.931817   12094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:34.932582   12094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:34.934298   12094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:34.934763   12094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:34.936241   12094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:37:34.931817   12094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:34.932582   12094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:34.934298   12094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:34.934763   12094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:34.936241   12094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:37:34.939947  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:37:34.939958  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:37:37.511892  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:37.522041  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:37:37.522101  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:37:37.546092  528764 cri.go:89] found id: ""
	I1217 20:37:37.546106  528764 logs.go:282] 0 containers: []
	W1217 20:37:37.546113  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:37:37.546119  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:37:37.546179  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:37:37.571827  528764 cri.go:89] found id: ""
	I1217 20:37:37.571841  528764 logs.go:282] 0 containers: []
	W1217 20:37:37.571848  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:37:37.571853  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:37:37.571912  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:37:37.597752  528764 cri.go:89] found id: ""
	I1217 20:37:37.597766  528764 logs.go:282] 0 containers: []
	W1217 20:37:37.597774  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:37:37.597779  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:37:37.597840  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:37:37.624088  528764 cri.go:89] found id: ""
	I1217 20:37:37.624102  528764 logs.go:282] 0 containers: []
	W1217 20:37:37.624109  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:37:37.624114  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:37:37.624170  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:37:37.651097  528764 cri.go:89] found id: ""
	I1217 20:37:37.651112  528764 logs.go:282] 0 containers: []
	W1217 20:37:37.651119  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:37:37.651125  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:37:37.651188  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:37:37.678706  528764 cri.go:89] found id: ""
	I1217 20:37:37.678720  528764 logs.go:282] 0 containers: []
	W1217 20:37:37.678728  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:37:37.678743  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:37:37.678804  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:37:37.705805  528764 cri.go:89] found id: ""
	I1217 20:37:37.705817  528764 logs.go:282] 0 containers: []
	W1217 20:37:37.705825  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:37:37.705833  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:37:37.705844  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:37:37.721021  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:37:37.721041  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:37:37.788297  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:37:37.779817   12180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:37.780331   12180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:37.782117   12180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:37.782612   12180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:37.784219   12180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:37:37.779817   12180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:37.780331   12180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:37.782117   12180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:37.782612   12180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:37.784219   12180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:37:37.788308  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:37:37.788318  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:37:37.865227  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:37:37.865247  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:37:37.897290  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:37:37.897308  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:37:40.462446  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:40.472823  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:37:40.472885  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:37:40.502899  528764 cri.go:89] found id: ""
	I1217 20:37:40.502914  528764 logs.go:282] 0 containers: []
	W1217 20:37:40.502926  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:37:40.502931  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:37:40.502988  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:37:40.528131  528764 cri.go:89] found id: ""
	I1217 20:37:40.528144  528764 logs.go:282] 0 containers: []
	W1217 20:37:40.528151  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:37:40.528156  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:37:40.528214  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:37:40.552632  528764 cri.go:89] found id: ""
	I1217 20:37:40.552646  528764 logs.go:282] 0 containers: []
	W1217 20:37:40.552653  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:37:40.552659  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:37:40.552715  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:37:40.578013  528764 cri.go:89] found id: ""
	I1217 20:37:40.578028  528764 logs.go:282] 0 containers: []
	W1217 20:37:40.578035  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:37:40.578042  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:37:40.578100  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:37:40.604172  528764 cri.go:89] found id: ""
	I1217 20:37:40.604186  528764 logs.go:282] 0 containers: []
	W1217 20:37:40.604193  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:37:40.604198  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:37:40.604253  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:37:40.629837  528764 cri.go:89] found id: ""
	I1217 20:37:40.629851  528764 logs.go:282] 0 containers: []
	W1217 20:37:40.629867  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:37:40.629872  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:37:40.629931  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:37:40.656555  528764 cri.go:89] found id: ""
	I1217 20:37:40.656568  528764 logs.go:282] 0 containers: []
	W1217 20:37:40.656576  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:37:40.656583  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:37:40.656593  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:37:40.670930  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:37:40.670946  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:37:40.736814  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:37:40.728916   12288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:40.729413   12288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:40.731058   12288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:40.731457   12288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:40.733254   12288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:37:40.728916   12288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:40.729413   12288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:40.731058   12288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:40.731457   12288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:40.733254   12288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:37:40.736824  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:37:40.736835  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:37:40.803782  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:37:40.803800  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:37:40.851556  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:37:40.851572  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:37:43.430627  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:43.440939  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:37:43.441000  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:37:43.470749  528764 cri.go:89] found id: ""
	I1217 20:37:43.470764  528764 logs.go:282] 0 containers: []
	W1217 20:37:43.470771  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:37:43.470777  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:37:43.470833  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:37:43.495753  528764 cri.go:89] found id: ""
	I1217 20:37:43.495766  528764 logs.go:282] 0 containers: []
	W1217 20:37:43.495774  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:37:43.495779  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:37:43.495836  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:37:43.521880  528764 cri.go:89] found id: ""
	I1217 20:37:43.521896  528764 logs.go:282] 0 containers: []
	W1217 20:37:43.521903  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:37:43.521908  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:37:43.521971  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:37:43.547990  528764 cri.go:89] found id: ""
	I1217 20:37:43.548004  528764 logs.go:282] 0 containers: []
	W1217 20:37:43.548012  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:37:43.548018  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:37:43.548080  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:37:43.576401  528764 cri.go:89] found id: ""
	I1217 20:37:43.576415  528764 logs.go:282] 0 containers: []
	W1217 20:37:43.576422  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:37:43.576427  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:37:43.576485  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:37:43.604828  528764 cri.go:89] found id: ""
	I1217 20:37:43.604840  528764 logs.go:282] 0 containers: []
	W1217 20:37:43.604848  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:37:43.604853  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:37:43.604909  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:37:43.636907  528764 cri.go:89] found id: ""
	I1217 20:37:43.636920  528764 logs.go:282] 0 containers: []
	W1217 20:37:43.636927  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:37:43.636935  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:37:43.636945  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:37:43.701148  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:37:43.701165  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:37:43.715342  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:37:43.715357  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:37:43.787937  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:37:43.780156   12396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:43.780601   12396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:43.782216   12396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:43.782718   12396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:43.784167   12396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:37:43.780156   12396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:43.780601   12396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:43.782216   12396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:43.782718   12396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:43.784167   12396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:37:43.787957  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:37:43.787968  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:37:43.858959  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:37:43.858978  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:37:46.395799  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:46.406118  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:37:46.406190  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:37:46.433062  528764 cri.go:89] found id: ""
	I1217 20:37:46.433076  528764 logs.go:282] 0 containers: []
	W1217 20:37:46.433083  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:37:46.433089  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:37:46.433151  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:37:46.459553  528764 cri.go:89] found id: ""
	I1217 20:37:46.459568  528764 logs.go:282] 0 containers: []
	W1217 20:37:46.459575  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:37:46.459604  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:37:46.459668  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:37:46.484831  528764 cri.go:89] found id: ""
	I1217 20:37:46.484845  528764 logs.go:282] 0 containers: []
	W1217 20:37:46.484853  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:37:46.484858  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:37:46.484920  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:37:46.509669  528764 cri.go:89] found id: ""
	I1217 20:37:46.509683  528764 logs.go:282] 0 containers: []
	W1217 20:37:46.509690  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:37:46.509695  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:37:46.509752  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:37:46.534227  528764 cri.go:89] found id: ""
	I1217 20:37:46.534242  528764 logs.go:282] 0 containers: []
	W1217 20:37:46.534254  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:37:46.534260  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:37:46.534316  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:37:46.563383  528764 cri.go:89] found id: ""
	I1217 20:37:46.563397  528764 logs.go:282] 0 containers: []
	W1217 20:37:46.563405  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:37:46.563411  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:37:46.563476  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:37:46.589321  528764 cri.go:89] found id: ""
	I1217 20:37:46.589335  528764 logs.go:282] 0 containers: []
	W1217 20:37:46.589342  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:37:46.589350  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:37:46.589364  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:37:46.654894  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:37:46.654914  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:37:46.669806  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:37:46.669822  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:37:46.731726  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:37:46.723562   12499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:46.724128   12499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:46.725849   12499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:46.726343   12499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:46.727890   12499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:37:46.723562   12499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:46.724128   12499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:46.725849   12499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:46.726343   12499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:46.727890   12499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:37:46.731737  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:37:46.731763  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:37:46.799300  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:37:46.799320  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
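	[editor's note] Every "describe nodes" attempt in this window dies with "dial tcp [::1]:8441: connect: connection refused": the kubeconfig points at localhost:8441 but no process is listening there. A quick manual confirmation (hypothetical, not part of the harness; assumes ss and curl are available on the node):

	    # Show any listener on 8441; empty output means the apiserver never bound the port
	    sudo ss -ltnp | grep ':8441' || echo "nothing listening on 8441"

	    # Probe the endpoint the kubeconfig targets (expect "connection refused" here)
	    curl -sk https://localhost:8441/healthz || true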
	I1217 20:37:49.348034  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:49.358157  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:37:49.358218  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:37:49.382823  528764 cri.go:89] found id: ""
	I1217 20:37:49.382837  528764 logs.go:282] 0 containers: []
	W1217 20:37:49.382844  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:37:49.382849  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:37:49.382917  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:37:49.409079  528764 cri.go:89] found id: ""
	I1217 20:37:49.409094  528764 logs.go:282] 0 containers: []
	W1217 20:37:49.409101  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:37:49.409106  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:37:49.409162  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:37:49.434313  528764 cri.go:89] found id: ""
	I1217 20:37:49.434327  528764 logs.go:282] 0 containers: []
	W1217 20:37:49.434340  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:37:49.434354  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:37:49.434426  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:37:49.460512  528764 cri.go:89] found id: ""
	I1217 20:37:49.460527  528764 logs.go:282] 0 containers: []
	W1217 20:37:49.460535  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:37:49.460551  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:37:49.460609  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:37:49.486735  528764 cri.go:89] found id: ""
	I1217 20:37:49.486748  528764 logs.go:282] 0 containers: []
	W1217 20:37:49.486756  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:37:49.486762  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:37:49.486830  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:37:49.512071  528764 cri.go:89] found id: ""
	I1217 20:37:49.512085  528764 logs.go:282] 0 containers: []
	W1217 20:37:49.512092  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:37:49.512098  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:37:49.512155  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:37:49.541263  528764 cri.go:89] found id: ""
	I1217 20:37:49.541277  528764 logs.go:282] 0 containers: []
	W1217 20:37:49.541284  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:37:49.541293  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:37:49.541310  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:37:49.570361  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:37:49.570378  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:37:49.638598  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:37:49.638618  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:37:49.653362  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:37:49.653381  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:37:49.715767  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:37:49.708109   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:49.708580   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:49.710179   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:49.710574   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:49.712039   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:37:49.708109   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:49.708580   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:49.710179   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:49.710574   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:49.712039   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:37:49.715778  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:37:49.715788  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:37:52.283800  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:52.293434  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:37:52.293494  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:37:52.318791  528764 cri.go:89] found id: ""
	I1217 20:37:52.318805  528764 logs.go:282] 0 containers: []
	W1217 20:37:52.318812  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:37:52.318818  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:37:52.318876  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:37:52.344510  528764 cri.go:89] found id: ""
	I1217 20:37:52.344525  528764 logs.go:282] 0 containers: []
	W1217 20:37:52.344543  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:37:52.344549  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:37:52.344607  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:37:52.369118  528764 cri.go:89] found id: ""
	I1217 20:37:52.369132  528764 logs.go:282] 0 containers: []
	W1217 20:37:52.369140  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:37:52.369145  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:37:52.369200  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:37:52.394333  528764 cri.go:89] found id: ""
	I1217 20:37:52.394346  528764 logs.go:282] 0 containers: []
	W1217 20:37:52.394377  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:37:52.394383  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:37:52.394448  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:37:52.419501  528764 cri.go:89] found id: ""
	I1217 20:37:52.419525  528764 logs.go:282] 0 containers: []
	W1217 20:37:52.419532  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:37:52.419537  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:37:52.419626  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:37:52.448909  528764 cri.go:89] found id: ""
	I1217 20:37:52.448923  528764 logs.go:282] 0 containers: []
	W1217 20:37:52.448930  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:37:52.448936  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:37:52.449018  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:37:52.478490  528764 cri.go:89] found id: ""
	I1217 20:37:52.478513  528764 logs.go:282] 0 containers: []
	W1217 20:37:52.478521  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:37:52.478529  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:37:52.478539  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:37:52.542920  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:37:52.542939  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:37:52.558035  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:37:52.558052  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:37:52.621690  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:37:52.613254   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:52.613953   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:52.615616   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:52.615996   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:52.617541   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:37:52.613254   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:52.613953   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:52.615616   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:52.615996   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:52.617541   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:37:52.621710  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:37:52.621721  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:37:52.689051  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:37:52.689070  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
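
The "container status" step above shells out with a fallback, sudo `which crictl || echo crictl` ps -a || sudo docker ps -a, so a listing is still produced when crictl is missing from PATH or a Docker runtime is in use. Below is a minimal Go sketch of that same fallback pattern; the helper name containerStatus is hypothetical, this is an illustration of the pattern visible in the log rather than minikube's actual implementation, and it assumes passwordless sudo on the node:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // containerStatus mirrors the log's fallback: prefer crictl,
    // fall back to docker when crictl is absent or its listing fails.
    func containerStatus() (string, error) {
        if path, err := exec.LookPath("crictl"); err == nil {
            if out, err := exec.Command("sudo", path, "ps", "-a").Output(); err == nil {
                return string(out), nil
            }
        }
        out, err := exec.Command("sudo", "docker", "ps", "-a").Output()
        return string(out), err
    }

    func main() {
        out, err := containerStatus()
        if err != nil {
            fmt.Println("no runtime listing available:", err)
            return
        }
        fmt.Print(out)
    }
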
	I1217 20:37:55.225326  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:55.235484  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:37:55.235545  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:37:55.260455  528764 cri.go:89] found id: ""
	I1217 20:37:55.260469  528764 logs.go:282] 0 containers: []
	W1217 20:37:55.260477  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:37:55.260482  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:37:55.260542  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:37:55.285381  528764 cri.go:89] found id: ""
	I1217 20:37:55.285396  528764 logs.go:282] 0 containers: []
	W1217 20:37:55.285404  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:37:55.285409  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:37:55.285464  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:37:55.311167  528764 cri.go:89] found id: ""
	I1217 20:37:55.311181  528764 logs.go:282] 0 containers: []
	W1217 20:37:55.311188  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:37:55.311194  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:37:55.311266  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:37:55.336553  528764 cri.go:89] found id: ""
	I1217 20:37:55.336568  528764 logs.go:282] 0 containers: []
	W1217 20:37:55.336575  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:37:55.336580  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:37:55.336636  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:37:55.362555  528764 cri.go:89] found id: ""
	I1217 20:37:55.362569  528764 logs.go:282] 0 containers: []
	W1217 20:37:55.362576  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:37:55.362582  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:37:55.362636  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:37:55.392446  528764 cri.go:89] found id: ""
	I1217 20:37:55.392460  528764 logs.go:282] 0 containers: []
	W1217 20:37:55.392468  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:37:55.392473  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:37:55.392529  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:37:55.421227  528764 cri.go:89] found id: ""
	I1217 20:37:55.421242  528764 logs.go:282] 0 containers: []
	W1217 20:37:55.421250  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:37:55.421257  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:37:55.421267  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:37:55.452467  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:37:55.452485  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:37:55.520333  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:37:55.520354  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:37:55.535397  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:37:55.535423  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:37:55.600267  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:37:55.591521   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:55.592108   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:55.593636   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:55.594292   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:55.596071   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:37:55.591521   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:55.592108   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:55.593636   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:55.594292   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:55.596071   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:37:55.600278  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:37:55.600290  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:37:58.172840  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:37:58.183231  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:37:58.183290  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:37:58.207527  528764 cri.go:89] found id: ""
	I1217 20:37:58.207541  528764 logs.go:282] 0 containers: []
	W1217 20:37:58.207548  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:37:58.207553  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:37:58.207649  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:37:58.232533  528764 cri.go:89] found id: ""
	I1217 20:37:58.232547  528764 logs.go:282] 0 containers: []
	W1217 20:37:58.232555  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:37:58.232559  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:37:58.232613  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:37:58.257969  528764 cri.go:89] found id: ""
	I1217 20:37:58.257983  528764 logs.go:282] 0 containers: []
	W1217 20:37:58.257990  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:37:58.257996  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:37:58.258051  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:37:58.283047  528764 cri.go:89] found id: ""
	I1217 20:37:58.283060  528764 logs.go:282] 0 containers: []
	W1217 20:37:58.283067  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:37:58.283072  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:37:58.283126  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:37:58.308494  528764 cri.go:89] found id: ""
	I1217 20:37:58.308508  528764 logs.go:282] 0 containers: []
	W1217 20:37:58.308515  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:37:58.308521  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:37:58.308578  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:37:58.333008  528764 cri.go:89] found id: ""
	I1217 20:37:58.333022  528764 logs.go:282] 0 containers: []
	W1217 20:37:58.333029  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:37:58.333035  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:37:58.333087  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:37:58.363097  528764 cri.go:89] found id: ""
	I1217 20:37:58.363111  528764 logs.go:282] 0 containers: []
	W1217 20:37:58.363118  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:37:58.363126  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:37:58.363145  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:37:58.428415  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:37:58.419398   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:58.420110   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:58.421854   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:58.422467   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:58.424133   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:37:58.419398   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:58.420110   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:58.421854   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:58.422467   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:37:58.424133   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:37:58.428426  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:37:58.428437  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:37:58.497159  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:37:58.497179  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:37:58.528904  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:37:58.528921  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:37:58.594783  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:37:58.594803  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:38:01.111545  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:38:01.123462  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:38:01.123520  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:38:01.152472  528764 cri.go:89] found id: ""
	I1217 20:38:01.152487  528764 logs.go:282] 0 containers: []
	W1217 20:38:01.152494  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:38:01.152499  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:38:01.152561  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:38:01.178899  528764 cri.go:89] found id: ""
	I1217 20:38:01.178913  528764 logs.go:282] 0 containers: []
	W1217 20:38:01.178921  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:38:01.178926  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:38:01.178983  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:38:01.206687  528764 cri.go:89] found id: ""
	I1217 20:38:01.206701  528764 logs.go:282] 0 containers: []
	W1217 20:38:01.206709  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:38:01.206714  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:38:01.206771  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:38:01.232497  528764 cri.go:89] found id: ""
	I1217 20:38:01.232511  528764 logs.go:282] 0 containers: []
	W1217 20:38:01.232519  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:38:01.232524  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:38:01.232579  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:38:01.261011  528764 cri.go:89] found id: ""
	I1217 20:38:01.261025  528764 logs.go:282] 0 containers: []
	W1217 20:38:01.261032  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:38:01.261037  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:38:01.261098  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:38:01.286117  528764 cri.go:89] found id: ""
	I1217 20:38:01.286132  528764 logs.go:282] 0 containers: []
	W1217 20:38:01.286150  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:38:01.286156  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:38:01.286222  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:38:01.312040  528764 cri.go:89] found id: ""
	I1217 20:38:01.312055  528764 logs.go:282] 0 containers: []
	W1217 20:38:01.312062  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:38:01.312069  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:38:01.312080  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:38:01.382670  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:38:01.382692  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:38:01.414378  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:38:01.414394  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:38:01.482999  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:38:01.483019  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:38:01.497972  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:38:01.497987  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:38:01.566351  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:38:01.557988   13036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:01.558761   13036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:01.560515   13036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:01.561020   13036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:01.562479   13036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:38:01.557988   13036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:01.558761   13036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:01.560515   13036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:01.561020   13036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:01.562479   13036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
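
The block above is one iteration of a retry loop that repeats roughly every three seconds: pgrep looks for a running kube-apiserver process, the CRI is queried for each expected control-plane container by name, and when none are found the kubelet, dmesg, and CRI-O logs are gathered and kubectl describe nodes is retried, failing because nothing answers on localhost:8441. A minimal Go sketch of that probe sequence follows; it assumes crictl is installed on the node, the helper name probe is hypothetical, and this is an illustration of what the log shows, not minikube's code:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // components mirrors the names queried in the log above.
    var components = []string{
        "kube-apiserver", "etcd", "coredns",
        "kube-scheduler", "kube-proxy",
        "kube-controller-manager", "kindnet",
    }

    // probe asks the CRI for containers matching name, as the log's
    // `sudo crictl ps -a --quiet --name=<component>` invocations do.
    func probe(name string) (bool, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a",
            "--quiet", "--name="+name).Output()
        if err != nil {
            return false, err
        }
        return strings.TrimSpace(string(out)) != "", nil
    }

    func main() {
        for attempt := 0; attempt < 240; attempt++ { // the real harness times out; this bound is arbitrary
            missing := 0
            for _, c := range components {
                found, err := probe(c)
                switch {
                case err != nil:
                    fmt.Println("probe error:", err)
                    missing++
                case !found:
                    fmt.Printf("no container matching %q\n", c)
                    missing++
                }
            }
            if missing == 0 {
                fmt.Println("control-plane containers present")
                return
            }
            time.Sleep(3 * time.Second) // matches the ~3 s cadence of the timestamps in the log
        }
        fmt.Println("gave up waiting for control-plane containers")
    }

The component list is copied from the queries in the log; the three-second sleep matches the observed timestamps (20:38:01, 20:38:04, 20:38:07, ...).
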
	I1217 20:38:04.066612  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:38:04.079947  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:38:04.080010  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:38:04.114202  528764 cri.go:89] found id: ""
	I1217 20:38:04.114216  528764 logs.go:282] 0 containers: []
	W1217 20:38:04.114223  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:38:04.114228  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:38:04.114294  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:38:04.144225  528764 cri.go:89] found id: ""
	I1217 20:38:04.144238  528764 logs.go:282] 0 containers: []
	W1217 20:38:04.144246  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:38:04.144250  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:38:04.144306  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:38:04.174041  528764 cri.go:89] found id: ""
	I1217 20:38:04.174055  528764 logs.go:282] 0 containers: []
	W1217 20:38:04.174066  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:38:04.174072  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:38:04.174138  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:38:04.198282  528764 cri.go:89] found id: ""
	I1217 20:38:04.198296  528764 logs.go:282] 0 containers: []
	W1217 20:38:04.198304  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:38:04.198309  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:38:04.198381  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:38:04.223855  528764 cri.go:89] found id: ""
	I1217 20:38:04.223869  528764 logs.go:282] 0 containers: []
	W1217 20:38:04.223888  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:38:04.223897  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:38:04.223965  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:38:04.249576  528764 cri.go:89] found id: ""
	I1217 20:38:04.249592  528764 logs.go:282] 0 containers: []
	W1217 20:38:04.249599  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:38:04.249604  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:38:04.249667  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:38:04.278330  528764 cri.go:89] found id: ""
	I1217 20:38:04.278344  528764 logs.go:282] 0 containers: []
	W1217 20:38:04.278351  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:38:04.278359  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:38:04.278369  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:38:04.346075  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:38:04.346098  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:38:04.379272  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:38:04.379287  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:38:04.446775  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:38:04.446795  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:38:04.461788  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:38:04.461804  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:38:04.526831  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:38:04.519073   13145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:04.519534   13145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:04.520989   13145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:04.521420   13145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:04.522964   13145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:38:04.519073   13145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:04.519534   13145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:04.520989   13145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:04.521420   13145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:04.522964   13145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:38:07.028018  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:38:07.038329  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:38:07.038394  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:38:07.070882  528764 cri.go:89] found id: ""
	I1217 20:38:07.070911  528764 logs.go:282] 0 containers: []
	W1217 20:38:07.070919  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:38:07.070925  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:38:07.070991  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:38:07.104836  528764 cri.go:89] found id: ""
	I1217 20:38:07.104850  528764 logs.go:282] 0 containers: []
	W1217 20:38:07.104857  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:38:07.104863  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:38:07.104932  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:38:07.141894  528764 cri.go:89] found id: ""
	I1217 20:38:07.141908  528764 logs.go:282] 0 containers: []
	W1217 20:38:07.141916  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:38:07.141921  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:38:07.141990  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:38:07.169039  528764 cri.go:89] found id: ""
	I1217 20:38:07.169053  528764 logs.go:282] 0 containers: []
	W1217 20:38:07.169061  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:38:07.169066  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:38:07.169123  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:38:07.194478  528764 cri.go:89] found id: ""
	I1217 20:38:07.194501  528764 logs.go:282] 0 containers: []
	W1217 20:38:07.194509  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:38:07.194514  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:38:07.194579  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:38:07.219609  528764 cri.go:89] found id: ""
	I1217 20:38:07.219624  528764 logs.go:282] 0 containers: []
	W1217 20:38:07.219632  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:38:07.219638  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:38:07.219705  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:38:07.243819  528764 cri.go:89] found id: ""
	I1217 20:38:07.243832  528764 logs.go:282] 0 containers: []
	W1217 20:38:07.243840  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:38:07.243847  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:38:07.243857  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:38:07.311464  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:38:07.311483  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:38:07.343698  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:38:07.343751  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:38:07.410312  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:38:07.410332  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:38:07.424918  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:38:07.424934  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:38:07.487872  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:38:07.479467   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:07.480073   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:07.481799   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:07.482374   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:07.484022   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:38:07.479467   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:07.480073   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:07.481799   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:07.482374   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:07.484022   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:38:09.989569  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:38:10.015377  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:38:10.015448  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:38:10.044563  528764 cri.go:89] found id: ""
	I1217 20:38:10.044582  528764 logs.go:282] 0 containers: []
	W1217 20:38:10.044590  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:38:10.044596  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:38:10.044659  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:38:10.082544  528764 cri.go:89] found id: ""
	I1217 20:38:10.082572  528764 logs.go:282] 0 containers: []
	W1217 20:38:10.082579  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:38:10.082585  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:38:10.082655  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:38:10.111998  528764 cri.go:89] found id: ""
	I1217 20:38:10.112021  528764 logs.go:282] 0 containers: []
	W1217 20:38:10.112028  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:38:10.112034  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:38:10.112090  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:38:10.143847  528764 cri.go:89] found id: ""
	I1217 20:38:10.143875  528764 logs.go:282] 0 containers: []
	W1217 20:38:10.143883  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:38:10.143888  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:38:10.143959  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:38:10.169935  528764 cri.go:89] found id: ""
	I1217 20:38:10.169948  528764 logs.go:282] 0 containers: []
	W1217 20:38:10.169956  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:38:10.169961  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:38:10.170035  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:38:10.199354  528764 cri.go:89] found id: ""
	I1217 20:38:10.199367  528764 logs.go:282] 0 containers: []
	W1217 20:38:10.199389  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:38:10.199395  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:38:10.199469  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:38:10.224921  528764 cri.go:89] found id: ""
	I1217 20:38:10.224934  528764 logs.go:282] 0 containers: []
	W1217 20:38:10.224942  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:38:10.224950  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:38:10.224961  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:38:10.292927  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:38:10.292947  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:38:10.321993  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:38:10.322010  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:38:10.388855  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:38:10.388876  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:38:10.404211  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:38:10.404228  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:38:10.466886  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:38:10.458803   13359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:10.459494   13359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:10.461201   13359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:10.461646   13359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:10.463180   13359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:38:10.458803   13359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:10.459494   13359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:10.461201   13359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:10.461646   13359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:10.463180   13359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:38:12.968194  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:38:12.978084  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:38:12.978143  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:38:13.006691  528764 cri.go:89] found id: ""
	I1217 20:38:13.006706  528764 logs.go:282] 0 containers: []
	W1217 20:38:13.006713  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:38:13.006719  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:38:13.006779  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:38:13.032773  528764 cri.go:89] found id: ""
	I1217 20:38:13.032787  528764 logs.go:282] 0 containers: []
	W1217 20:38:13.032795  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:38:13.032800  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:38:13.032854  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:38:13.059128  528764 cri.go:89] found id: ""
	I1217 20:38:13.059142  528764 logs.go:282] 0 containers: []
	W1217 20:38:13.059150  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:38:13.059155  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:38:13.059213  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:38:13.093983  528764 cri.go:89] found id: ""
	I1217 20:38:13.093997  528764 logs.go:282] 0 containers: []
	W1217 20:38:13.094005  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:38:13.094010  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:38:13.094066  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:38:13.136453  528764 cri.go:89] found id: ""
	I1217 20:38:13.136467  528764 logs.go:282] 0 containers: []
	W1217 20:38:13.136474  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:38:13.136481  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:38:13.136536  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:38:13.166382  528764 cri.go:89] found id: ""
	I1217 20:38:13.166396  528764 logs.go:282] 0 containers: []
	W1217 20:38:13.166403  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:38:13.166409  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:38:13.166476  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:38:13.194638  528764 cri.go:89] found id: ""
	I1217 20:38:13.194651  528764 logs.go:282] 0 containers: []
	W1217 20:38:13.194658  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:38:13.194666  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:38:13.194689  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:38:13.261344  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:38:13.261362  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:38:13.276057  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:38:13.276073  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:38:13.341759  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:38:13.333469   13450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:13.334136   13450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:13.335845   13450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:13.336372   13450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:13.337969   13450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:38:13.333469   13450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:13.334136   13450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:13.335845   13450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:13.336372   13450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:13.337969   13450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:38:13.341769  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:38:13.341780  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:38:13.412593  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:38:13.412613  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:38:15.945731  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:38:15.956026  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:38:15.956085  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:38:15.980875  528764 cri.go:89] found id: ""
	I1217 20:38:15.980889  528764 logs.go:282] 0 containers: []
	W1217 20:38:15.980897  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:38:15.980902  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:38:15.980956  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:38:16.017238  528764 cri.go:89] found id: ""
	I1217 20:38:16.017253  528764 logs.go:282] 0 containers: []
	W1217 20:38:16.017260  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:38:16.017265  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:38:16.017327  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:38:16.042662  528764 cri.go:89] found id: ""
	I1217 20:38:16.042676  528764 logs.go:282] 0 containers: []
	W1217 20:38:16.042684  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:38:16.042700  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:38:16.042759  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:38:16.070239  528764 cri.go:89] found id: ""
	I1217 20:38:16.070253  528764 logs.go:282] 0 containers: []
	W1217 20:38:16.070265  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:38:16.070281  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:38:16.070344  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:38:16.101763  528764 cri.go:89] found id: ""
	I1217 20:38:16.101777  528764 logs.go:282] 0 containers: []
	W1217 20:38:16.101785  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:38:16.101802  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:38:16.101863  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:38:16.132808  528764 cri.go:89] found id: ""
	I1217 20:38:16.132822  528764 logs.go:282] 0 containers: []
	W1217 20:38:16.132830  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:38:16.132835  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:38:16.132904  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:38:16.162901  528764 cri.go:89] found id: ""
	I1217 20:38:16.162925  528764 logs.go:282] 0 containers: []
	W1217 20:38:16.162932  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:38:16.162940  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:38:16.162951  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:38:16.177475  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:38:16.177491  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:38:16.239620  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:38:16.231437   13552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:16.232280   13552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:16.233889   13552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:16.234228   13552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:16.235746   13552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:38:16.231437   13552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:16.232280   13552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:16.233889   13552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:16.234228   13552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:16.235746   13552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:38:16.239630  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:38:16.239641  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:38:16.306695  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:38:16.306714  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:38:16.338739  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:38:16.338754  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
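
Every iteration fails identically: no apiserver container ever starts, so nothing listens on port 8441 and each kubectl call gets "connection refused". A quick way to confirm that symptom from inside the node is to attempt a TCP dial to the port; a hedged sketch (a manual diagnostic, not part of the test harness) is:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Port 8441 is the apiserver endpoint the log's kubectl calls target.
        conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver not reachable:", err) // expect "connection refused" here
            return
        }
        conn.Close()
        fmt.Println("something is listening on :8441")
    }
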
	I1217 20:38:18.906627  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:38:18.916877  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:38:18.916940  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:38:18.940995  528764 cri.go:89] found id: ""
	I1217 20:38:18.941009  528764 logs.go:282] 0 containers: []
	W1217 20:38:18.941016  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:38:18.941022  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:38:18.941090  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:38:18.967366  528764 cri.go:89] found id: ""
	I1217 20:38:18.967381  528764 logs.go:282] 0 containers: []
	W1217 20:38:18.967388  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:38:18.967393  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:38:18.967448  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:38:18.993265  528764 cri.go:89] found id: ""
	I1217 20:38:18.993279  528764 logs.go:282] 0 containers: []
	W1217 20:38:18.993286  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:38:18.993291  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:38:18.993345  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:38:19.020582  528764 cri.go:89] found id: ""
	I1217 20:38:19.020595  528764 logs.go:282] 0 containers: []
	W1217 20:38:19.020603  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:38:19.020608  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:38:19.020666  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:38:19.045982  528764 cri.go:89] found id: ""
	I1217 20:38:19.045996  528764 logs.go:282] 0 containers: []
	W1217 20:38:19.046005  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:38:19.046010  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:38:19.046069  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:38:19.073910  528764 cri.go:89] found id: ""
	I1217 20:38:19.073923  528764 logs.go:282] 0 containers: []
	W1217 20:38:19.073930  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:38:19.073936  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:38:19.073992  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:38:19.113478  528764 cri.go:89] found id: ""
	I1217 20:38:19.113491  528764 logs.go:282] 0 containers: []
	W1217 20:38:19.113499  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:38:19.113507  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:38:19.113517  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:38:19.181345  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:38:19.181364  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:38:19.196831  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:38:19.196848  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:38:19.262885  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:38:19.253623   13659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:19.254429   13659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:19.256066   13659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:19.256658   13659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:19.258445   13659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 20:38:19.262896  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:38:19.262907  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:38:19.332927  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:38:19.332947  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
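From here the trace settles into a fixed cadence: roughly every three seconds (20:38:19, :22, :25, ...) minikube re-runs the same probe, first pgrep for a kube-apiserver process, then a crictl listing for each control-plane component, and every pass comes back empty. A hedged standalone sketch of that loop (the commands, interval, and output strings are taken from the log; the structure is illustrative, not minikube's actual code):

    // poll.go: the retry loop this section of the trace reflects.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    var components = []string{
    	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    	"kube-proxy", "kube-controller-manager", "kindnet",
    }

    func main() {
    	for {
    		// Same process check as the log: pgrep exits 0 only on a match.
    		if exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
    			fmt.Println("kube-apiserver process found")
    			return
    		}
    		// Same per-component container listing, in any state.
    		for _, name := range components {
    			out, _ := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    			if len(out) == 0 {
    				fmt.Printf("No container was found matching %q\n", name)
    			}
    		}
    		time.Sleep(3 * time.Second) // the log shows ~3s between cycles
    	}
    }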
	I1217 20:38:21.863218  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:38:21.873488  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:38:21.873552  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:38:21.901892  528764 cri.go:89] found id: ""
	I1217 20:38:21.901907  528764 logs.go:282] 0 containers: []
	W1217 20:38:21.901915  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:38:21.901930  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:38:21.901988  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:38:21.928067  528764 cri.go:89] found id: ""
	I1217 20:38:21.928080  528764 logs.go:282] 0 containers: []
	W1217 20:38:21.928087  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:38:21.928092  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:38:21.928149  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:38:21.953356  528764 cri.go:89] found id: ""
	I1217 20:38:21.953371  528764 logs.go:282] 0 containers: []
	W1217 20:38:21.953378  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:38:21.953383  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:38:21.953444  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:38:21.987415  528764 cri.go:89] found id: ""
	I1217 20:38:21.987428  528764 logs.go:282] 0 containers: []
	W1217 20:38:21.987436  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:38:21.987442  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:38:21.987509  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:38:22.016922  528764 cri.go:89] found id: ""
	I1217 20:38:22.016937  528764 logs.go:282] 0 containers: []
	W1217 20:38:22.016945  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:38:22.016951  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:38:22.017009  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:38:22.044463  528764 cri.go:89] found id: ""
	I1217 20:38:22.044477  528764 logs.go:282] 0 containers: []
	W1217 20:38:22.044484  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:38:22.044490  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:38:22.044545  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:38:22.072815  528764 cri.go:89] found id: ""
	I1217 20:38:22.072828  528764 logs.go:282] 0 containers: []
	W1217 20:38:22.072836  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:38:22.072844  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:38:22.072854  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:38:22.106754  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:38:22.106778  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:38:22.177000  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:38:22.177019  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:38:22.191928  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:38:22.191945  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:38:22.254841  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:38:22.246562   13773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:22.247341   13773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:22.249143   13773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:22.249615   13773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:22.251134   13773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 20:38:22.254851  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:38:22.254862  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:38:24.826532  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:38:24.836772  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:38:24.836836  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:38:24.862693  528764 cri.go:89] found id: ""
	I1217 20:38:24.862706  528764 logs.go:282] 0 containers: []
	W1217 20:38:24.862714  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:38:24.862719  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:38:24.862789  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:38:24.887641  528764 cri.go:89] found id: ""
	I1217 20:38:24.887656  528764 logs.go:282] 0 containers: []
	W1217 20:38:24.887663  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:38:24.887668  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:38:24.887737  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:38:24.913131  528764 cri.go:89] found id: ""
	I1217 20:38:24.913145  528764 logs.go:282] 0 containers: []
	W1217 20:38:24.913168  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:38:24.913174  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:38:24.913242  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:38:24.939734  528764 cri.go:89] found id: ""
	I1217 20:38:24.939748  528764 logs.go:282] 0 containers: []
	W1217 20:38:24.939755  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:38:24.939760  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:38:24.939815  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:38:24.964904  528764 cri.go:89] found id: ""
	I1217 20:38:24.964919  528764 logs.go:282] 0 containers: []
	W1217 20:38:24.964925  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:38:24.964930  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:38:24.964988  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:38:24.990333  528764 cri.go:89] found id: ""
	I1217 20:38:24.990348  528764 logs.go:282] 0 containers: []
	W1217 20:38:24.990355  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:38:24.990361  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:38:24.990421  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:38:25.019872  528764 cri.go:89] found id: ""
	I1217 20:38:25.019887  528764 logs.go:282] 0 containers: []
	W1217 20:38:25.019895  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:38:25.019902  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:38:25.019914  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:38:25.036413  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:38:25.036438  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:38:25.112619  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:38:25.103911   13861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:25.104770   13861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:25.106472   13861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:25.107045   13861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:25.108652   13861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 20:38:25.112632  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:38:25.112642  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:38:25.184378  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:38:25.184399  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:38:25.216673  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:38:25.216689  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
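Each failed cycle ends with the same "Gathering logs for ..." sweep: kubelet and CRI-O via journalctl, kernel warnings via dmesg, node state via the bundled kubectl, and container status via crictl with a docker fallback. The commands are quoted verbatim in the trace; the following standalone sketch runs the same sweep locally (in the real run ssh_runner executes them on the node over SSH):

    // gather.go: the log-gathering commands quoted in the trace, run via bash -c.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	cmds := []struct{ name, cmd string }{
    		{"kubelet", "sudo journalctl -u kubelet -n 400"},
    		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
    		{"describe nodes", "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"},
    		{"CRI-O", "sudo journalctl -u crio -n 400"},
    		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
    	}
    	for _, c := range cmds {
    		fmt.Println("Gathering logs for", c.name, "...")
    		out, err := exec.Command("/bin/bash", "-c", c.cmd).CombinedOutput()
    		if err != nil {
    			fmt.Println("gather failed:", err) // describe nodes fails here, as in the log
    		}
    		fmt.Print(string(out))
    	}
    }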
	I1217 20:38:27.785567  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:38:27.796326  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:38:27.796391  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:38:27.825782  528764 cri.go:89] found id: ""
	I1217 20:38:27.825796  528764 logs.go:282] 0 containers: []
	W1217 20:38:27.825804  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:38:27.825809  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:38:27.825864  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:38:27.850601  528764 cri.go:89] found id: ""
	I1217 20:38:27.850614  528764 logs.go:282] 0 containers: []
	W1217 20:38:27.850627  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:38:27.850632  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:38:27.850700  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:38:27.876056  528764 cri.go:89] found id: ""
	I1217 20:38:27.876070  528764 logs.go:282] 0 containers: []
	W1217 20:38:27.876082  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:38:27.876087  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:38:27.876151  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:38:27.901899  528764 cri.go:89] found id: ""
	I1217 20:38:27.901913  528764 logs.go:282] 0 containers: []
	W1217 20:38:27.901920  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:38:27.901926  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:38:27.901997  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:38:27.931527  528764 cri.go:89] found id: ""
	I1217 20:38:27.931541  528764 logs.go:282] 0 containers: []
	W1217 20:38:27.931548  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:38:27.931553  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:38:27.931627  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:38:27.956390  528764 cri.go:89] found id: ""
	I1217 20:38:27.956404  528764 logs.go:282] 0 containers: []
	W1217 20:38:27.956411  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:38:27.956417  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:38:27.956473  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:38:27.985929  528764 cri.go:89] found id: ""
	I1217 20:38:27.985943  528764 logs.go:282] 0 containers: []
	W1217 20:38:27.985951  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:38:27.985959  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:38:27.985970  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:38:28.054474  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:38:28.054492  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:38:28.070115  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:38:28.070132  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:38:28.151327  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:38:28.142186   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:28.142985   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:28.145194   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:28.145756   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:28.147299   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 20:38:28.151337  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:38:28.151347  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:38:28.220518  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:38:28.220542  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:38:30.755166  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:38:30.765287  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:38:30.765345  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:38:30.790103  528764 cri.go:89] found id: ""
	I1217 20:38:30.790117  528764 logs.go:282] 0 containers: []
	W1217 20:38:30.790139  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:38:30.790145  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:38:30.790209  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:38:30.815526  528764 cri.go:89] found id: ""
	I1217 20:38:30.815539  528764 logs.go:282] 0 containers: []
	W1217 20:38:30.815547  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:38:30.815552  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:38:30.815647  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:38:30.841851  528764 cri.go:89] found id: ""
	I1217 20:38:30.841864  528764 logs.go:282] 0 containers: []
	W1217 20:38:30.841884  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:38:30.841890  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:38:30.841963  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:38:30.866784  528764 cri.go:89] found id: ""
	I1217 20:38:30.866798  528764 logs.go:282] 0 containers: []
	W1217 20:38:30.866829  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:38:30.866834  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:38:30.866922  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:38:30.892935  528764 cri.go:89] found id: ""
	I1217 20:38:30.892948  528764 logs.go:282] 0 containers: []
	W1217 20:38:30.892956  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:38:30.892961  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:38:30.893017  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:38:30.918525  528764 cri.go:89] found id: ""
	I1217 20:38:30.918545  528764 logs.go:282] 0 containers: []
	W1217 20:38:30.918552  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:38:30.918558  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:38:30.918624  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:38:30.946571  528764 cri.go:89] found id: ""
	I1217 20:38:30.946586  528764 logs.go:282] 0 containers: []
	W1217 20:38:30.946593  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:38:30.946600  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:38:30.946620  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:38:31.016310  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:38:31.016330  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:38:31.031710  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:38:31.031729  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:38:31.121622  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:38:31.112851   14070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:31.113732   14070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:31.115400   14070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:31.115997   14070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:31.117664   14070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 20:38:31.121632  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:38:31.121643  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:38:31.191069  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:38:31.191089  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
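The paired lines cri.go:89] found id: "" and logs.go:282] 0 containers: [] are two halves of one parse: crictl ps -a --quiet prints one container ID per line, so empty output yields a single empty ID and, after filtering, an empty slice. A small illustrative reconstruction (the function and file names are hypothetical; only the printed strings match the trace):

    // ids.go: how empty crictl --quiet output becomes "0 containers: []".
    package main

    import (
    	"fmt"
    	"strings"
    )

    func parseIDs(out string) []string {
    	var ids []string
    	for _, line := range strings.Split(out, "\n") {
    		id := strings.TrimSpace(line)
    		fmt.Printf("found id: %q\n", id) // the log's: found id: ""
    		if id != "" {
    			ids = append(ids, id)
    		}
    	}
    	return ids
    }

    func main() {
    	ids := parseIDs("") // crictl produced no output in every cycle above
    	fmt.Printf("%d containers: %v\n", len(ids), ids) // 0 containers: []
    }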
	I1217 20:38:33.724221  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:38:33.734488  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:38:33.734549  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:38:33.761235  528764 cri.go:89] found id: ""
	I1217 20:38:33.761249  528764 logs.go:282] 0 containers: []
	W1217 20:38:33.761256  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:38:33.761262  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:38:33.761322  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:38:33.787337  528764 cri.go:89] found id: ""
	I1217 20:38:33.787350  528764 logs.go:282] 0 containers: []
	W1217 20:38:33.787358  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:38:33.787363  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:38:33.787432  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:38:33.812684  528764 cri.go:89] found id: ""
	I1217 20:38:33.812706  528764 logs.go:282] 0 containers: []
	W1217 20:38:33.812714  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:38:33.812719  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:38:33.812784  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:38:33.842819  528764 cri.go:89] found id: ""
	I1217 20:38:33.842832  528764 logs.go:282] 0 containers: []
	W1217 20:38:33.842854  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:38:33.842865  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:38:33.842929  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:38:33.868875  528764 cri.go:89] found id: ""
	I1217 20:38:33.868889  528764 logs.go:282] 0 containers: []
	W1217 20:38:33.868897  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:38:33.868902  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:38:33.868961  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:38:33.898309  528764 cri.go:89] found id: ""
	I1217 20:38:33.898323  528764 logs.go:282] 0 containers: []
	W1217 20:38:33.898331  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:38:33.898356  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:38:33.898425  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:38:33.924913  528764 cri.go:89] found id: ""
	I1217 20:38:33.924927  528764 logs.go:282] 0 containers: []
	W1217 20:38:33.924935  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:38:33.924943  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:38:33.924957  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:38:33.990911  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:38:33.990930  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:38:34.008276  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:38:34.008297  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:38:34.087503  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:38:34.076899   14176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:34.077640   14176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:34.079660   14176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:34.080396   14176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:34.083391   14176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 20:38:34.087514  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:38:34.087537  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:38:34.163882  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:38:34.163901  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:38:36.694644  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:38:36.704742  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:38:36.704803  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:38:36.730340  528764 cri.go:89] found id: ""
	I1217 20:38:36.730354  528764 logs.go:282] 0 containers: []
	W1217 20:38:36.730363  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:38:36.730369  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:38:36.730426  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:38:36.757473  528764 cri.go:89] found id: ""
	I1217 20:38:36.757486  528764 logs.go:282] 0 containers: []
	W1217 20:38:36.757493  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:38:36.757499  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:38:36.757554  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:38:36.786113  528764 cri.go:89] found id: ""
	I1217 20:38:36.786127  528764 logs.go:282] 0 containers: []
	W1217 20:38:36.786135  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:38:36.786140  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:38:36.786246  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:38:36.812385  528764 cri.go:89] found id: ""
	I1217 20:38:36.812399  528764 logs.go:282] 0 containers: []
	W1217 20:38:36.812407  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:38:36.812412  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:38:36.812471  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:38:36.837075  528764 cri.go:89] found id: ""
	I1217 20:38:36.837088  528764 logs.go:282] 0 containers: []
	W1217 20:38:36.837095  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:38:36.837100  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:38:36.837156  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:38:36.866713  528764 cri.go:89] found id: ""
	I1217 20:38:36.866727  528764 logs.go:282] 0 containers: []
	W1217 20:38:36.866734  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:38:36.866740  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:38:36.866808  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:38:36.896063  528764 cri.go:89] found id: ""
	I1217 20:38:36.896078  528764 logs.go:282] 0 containers: []
	W1217 20:38:36.896085  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:38:36.896093  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:38:36.896106  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:38:36.961772  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:38:36.961793  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:38:36.976619  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:38:36.976637  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:38:37.049152  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:38:37.040423   14284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:37.041223   14284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:37.042928   14284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:37.043675   14284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:37.045309   14284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 20:38:37.049163  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:38:37.049174  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:38:37.119769  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:38:37.119788  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:38:39.651068  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:38:39.661185  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:38:39.661251  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:38:39.686602  528764 cri.go:89] found id: ""
	I1217 20:38:39.686616  528764 logs.go:282] 0 containers: []
	W1217 20:38:39.686623  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:38:39.686628  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:38:39.686685  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:38:39.711563  528764 cri.go:89] found id: ""
	I1217 20:38:39.711577  528764 logs.go:282] 0 containers: []
	W1217 20:38:39.711602  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:38:39.711608  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:38:39.711674  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:38:39.738013  528764 cri.go:89] found id: ""
	I1217 20:38:39.738027  528764 logs.go:282] 0 containers: []
	W1217 20:38:39.738034  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:38:39.738039  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:38:39.738094  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:38:39.763309  528764 cri.go:89] found id: ""
	I1217 20:38:39.763323  528764 logs.go:282] 0 containers: []
	W1217 20:38:39.763330  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:38:39.763336  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:38:39.763396  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:38:39.788615  528764 cri.go:89] found id: ""
	I1217 20:38:39.788628  528764 logs.go:282] 0 containers: []
	W1217 20:38:39.788640  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:38:39.788645  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:38:39.788701  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:38:39.813921  528764 cri.go:89] found id: ""
	I1217 20:38:39.813935  528764 logs.go:282] 0 containers: []
	W1217 20:38:39.813942  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:38:39.813948  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:38:39.814006  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:38:39.843230  528764 cri.go:89] found id: ""
	I1217 20:38:39.843244  528764 logs.go:282] 0 containers: []
	W1217 20:38:39.843252  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:38:39.843260  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:38:39.843271  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:38:39.857938  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:38:39.857954  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:38:39.921708  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:38:39.913994   14388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:39.914417   14388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:39.915990   14388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:39.916326   14388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:39.917797   14388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 20:38:39.921717  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:38:39.921730  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:38:39.992421  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:38:39.992444  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:38:40.032432  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:38:40.032451  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:38:42.605010  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:38:42.614872  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:38:42.614934  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:38:42.639899  528764 cri.go:89] found id: ""
	I1217 20:38:42.639913  528764 logs.go:282] 0 containers: []
	W1217 20:38:42.639920  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:38:42.639926  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:38:42.639996  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:38:42.670021  528764 cri.go:89] found id: ""
	I1217 20:38:42.670036  528764 logs.go:282] 0 containers: []
	W1217 20:38:42.670049  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:38:42.670055  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:38:42.670116  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:38:42.696223  528764 cri.go:89] found id: ""
	I1217 20:38:42.696237  528764 logs.go:282] 0 containers: []
	W1217 20:38:42.696244  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:38:42.696251  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:38:42.696310  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:38:42.722579  528764 cri.go:89] found id: ""
	I1217 20:38:42.722593  528764 logs.go:282] 0 containers: []
	W1217 20:38:42.722606  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:38:42.722612  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:38:42.722668  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:38:42.747677  528764 cri.go:89] found id: ""
	I1217 20:38:42.747690  528764 logs.go:282] 0 containers: []
	W1217 20:38:42.747698  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:38:42.747703  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:38:42.747764  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:38:42.774015  528764 cri.go:89] found id: ""
	I1217 20:38:42.774029  528764 logs.go:282] 0 containers: []
	W1217 20:38:42.774036  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:38:42.774053  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:38:42.774112  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:38:42.799502  528764 cri.go:89] found id: ""
	I1217 20:38:42.799516  528764 logs.go:282] 0 containers: []
	W1217 20:38:42.799525  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:38:42.799533  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:38:42.799543  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:38:42.865035  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:38:42.865058  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:38:42.880616  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:38:42.880633  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:38:42.949493  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:38:42.939951   14495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:42.940704   14495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:42.942455   14495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:42.943033   14495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:42.944768   14495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:38:42.939951   14495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:42.940704   14495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:42.942455   14495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:42.943033   14495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:42.944768   14495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:38:42.949505  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:38:42.949528  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:38:43.019292  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:38:43.019312  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
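	The cycle above is minikube's apiserver wait loop: check for a running kube-apiserver process, list CRI containers for each control-plane component, then re-gather kubelet, dmesg, CRI-O, and "describe nodes" output. A minimal sketch of the same probe sequence, runnable by hand on the node (the individual commands are taken verbatim from the log; the loop wrapper and sleep interval are illustrative assumptions, not minikube's actual code):
	
	# Sketch: reproduce the poll loop seen above (assumed wrapper around verbatim commands).
	while ! sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	             kube-controller-manager kindnet; do
	        # Empty output here matches the repeated `found id: ""` lines in the log.
	        sudo crictl ps -a --quiet --name="$c"
	    done
	    sleep 3   # the log shows one pass roughly every 3 seconds
	done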
	I1217 20:38:45.548705  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:38:45.558968  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:38:45.559027  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:38:45.583967  528764 cri.go:89] found id: ""
	I1217 20:38:45.583982  528764 logs.go:282] 0 containers: []
	W1217 20:38:45.583989  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:38:45.583994  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:38:45.584050  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:38:45.609420  528764 cri.go:89] found id: ""
	I1217 20:38:45.609434  528764 logs.go:282] 0 containers: []
	W1217 20:38:45.609441  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:38:45.609447  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:38:45.609508  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:38:45.640522  528764 cri.go:89] found id: ""
	I1217 20:38:45.640546  528764 logs.go:282] 0 containers: []
	W1217 20:38:45.640554  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:38:45.640559  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:38:45.640625  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:38:45.666349  528764 cri.go:89] found id: ""
	I1217 20:38:45.666362  528764 logs.go:282] 0 containers: []
	W1217 20:38:45.666369  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:38:45.666375  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:38:45.666432  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:38:45.696168  528764 cri.go:89] found id: ""
	I1217 20:38:45.696182  528764 logs.go:282] 0 containers: []
	W1217 20:38:45.696189  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:38:45.696194  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:38:45.696255  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:38:45.719763  528764 cri.go:89] found id: ""
	I1217 20:38:45.719777  528764 logs.go:282] 0 containers: []
	W1217 20:38:45.719784  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:38:45.719790  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:38:45.719847  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:38:45.744391  528764 cri.go:89] found id: ""
	I1217 20:38:45.744405  528764 logs.go:282] 0 containers: []
	W1217 20:38:45.744412  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:38:45.744421  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:38:45.744451  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:38:45.809635  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:38:45.809656  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:38:45.824260  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:38:45.824275  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:38:45.887725  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:38:45.879670   14601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:45.880327   14601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:45.881887   14601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:45.882340   14601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:45.883862   14601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:38:45.879670   14601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:45.880327   14601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:45.881887   14601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:45.882340   14601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:45.883862   14601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:38:45.887735  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:38:45.887746  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:38:45.955422  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:38:45.955441  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:38:48.485624  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:38:48.495313  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:38:48.495374  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:38:48.520059  528764 cri.go:89] found id: ""
	I1217 20:38:48.520074  528764 logs.go:282] 0 containers: []
	W1217 20:38:48.520081  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:38:48.520087  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:38:48.520143  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:38:48.545655  528764 cri.go:89] found id: ""
	I1217 20:38:48.545670  528764 logs.go:282] 0 containers: []
	W1217 20:38:48.545677  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:38:48.545682  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:38:48.545740  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:38:48.570521  528764 cri.go:89] found id: ""
	I1217 20:38:48.570535  528764 logs.go:282] 0 containers: []
	W1217 20:38:48.570543  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:38:48.570548  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:38:48.570606  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:38:48.596861  528764 cri.go:89] found id: ""
	I1217 20:38:48.596875  528764 logs.go:282] 0 containers: []
	W1217 20:38:48.596883  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:38:48.596888  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:38:48.596946  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:38:48.623093  528764 cri.go:89] found id: ""
	I1217 20:38:48.623115  528764 logs.go:282] 0 containers: []
	W1217 20:38:48.623123  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:38:48.623128  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:38:48.623203  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:38:48.648854  528764 cri.go:89] found id: ""
	I1217 20:38:48.648868  528764 logs.go:282] 0 containers: []
	W1217 20:38:48.648876  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:38:48.648881  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:38:48.648953  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:38:48.673887  528764 cri.go:89] found id: ""
	I1217 20:38:48.673911  528764 logs.go:282] 0 containers: []
	W1217 20:38:48.673919  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:38:48.673928  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:38:48.673939  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:38:48.739985  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:38:48.740004  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:38:48.754655  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:38:48.754672  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:38:48.818714  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:38:48.810661   14706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:48.811171   14706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:48.812860   14706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:48.813319   14706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:48.814815   14706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:38:48.810661   14706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:48.811171   14706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:48.812860   14706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:48.813319   14706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:48.814815   14706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:38:48.818724  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:38:48.818734  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:38:48.889255  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:38:48.889281  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:38:51.421767  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:38:51.432066  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:38:51.432137  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:38:51.461100  528764 cri.go:89] found id: ""
	I1217 20:38:51.461115  528764 logs.go:282] 0 containers: []
	W1217 20:38:51.461123  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:38:51.461132  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:38:51.461205  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:38:51.493482  528764 cri.go:89] found id: ""
	I1217 20:38:51.493495  528764 logs.go:282] 0 containers: []
	W1217 20:38:51.493503  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:38:51.493508  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:38:51.493573  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:38:51.523360  528764 cri.go:89] found id: ""
	I1217 20:38:51.523374  528764 logs.go:282] 0 containers: []
	W1217 20:38:51.523382  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:38:51.523387  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:38:51.523443  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:38:51.549129  528764 cri.go:89] found id: ""
	I1217 20:38:51.549143  528764 logs.go:282] 0 containers: []
	W1217 20:38:51.549151  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:38:51.549156  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:38:51.549212  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:38:51.575573  528764 cri.go:89] found id: ""
	I1217 20:38:51.575613  528764 logs.go:282] 0 containers: []
	W1217 20:38:51.575621  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:38:51.575631  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:38:51.575698  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:38:51.601059  528764 cri.go:89] found id: ""
	I1217 20:38:51.601074  528764 logs.go:282] 0 containers: []
	W1217 20:38:51.601081  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:38:51.601087  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:38:51.601153  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:38:51.626446  528764 cri.go:89] found id: ""
	I1217 20:38:51.626461  528764 logs.go:282] 0 containers: []
	W1217 20:38:51.626468  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:38:51.626476  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:38:51.626487  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:38:51.693973  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:38:51.693993  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:38:51.724023  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:38:51.724039  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:38:51.788885  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:38:51.788906  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:38:51.803552  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:38:51.803568  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:38:51.866022  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:38:51.858220   14821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:51.858930   14821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:51.860542   14821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:51.860857   14821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:51.862309   14821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:38:51.858220   14821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:51.858930   14821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:51.860542   14821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:51.860857   14821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:51.862309   14821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:38:54.367685  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:38:54.378312  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:38:54.378367  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:38:54.407726  528764 cri.go:89] found id: ""
	I1217 20:38:54.407744  528764 logs.go:282] 0 containers: []
	W1217 20:38:54.407752  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:38:54.407758  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:38:54.407815  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:38:54.432535  528764 cri.go:89] found id: ""
	I1217 20:38:54.432550  528764 logs.go:282] 0 containers: []
	W1217 20:38:54.432557  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:38:54.432562  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:38:54.432623  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:38:54.458438  528764 cri.go:89] found id: ""
	I1217 20:38:54.458453  528764 logs.go:282] 0 containers: []
	W1217 20:38:54.458460  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:38:54.458465  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:38:54.458527  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:38:54.487170  528764 cri.go:89] found id: ""
	I1217 20:38:54.487184  528764 logs.go:282] 0 containers: []
	W1217 20:38:54.487191  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:38:54.487198  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:38:54.487254  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:38:54.512876  528764 cri.go:89] found id: ""
	I1217 20:38:54.512890  528764 logs.go:282] 0 containers: []
	W1217 20:38:54.512897  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:38:54.512902  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:38:54.512959  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:38:54.537031  528764 cri.go:89] found id: ""
	I1217 20:38:54.537044  528764 logs.go:282] 0 containers: []
	W1217 20:38:54.537051  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:38:54.537056  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:38:54.537112  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:38:54.562349  528764 cri.go:89] found id: ""
	I1217 20:38:54.562363  528764 logs.go:282] 0 containers: []
	W1217 20:38:54.562387  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:38:54.562396  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:38:54.562406  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:38:54.628118  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:38:54.628137  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:38:54.642915  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:38:54.642932  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:38:54.707130  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:38:54.699152   14913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:54.699635   14913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:54.701269   14913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:54.701677   14913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:54.703119   14913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:38:54.699152   14913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:54.699635   14913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:54.701269   14913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:54.701677   14913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:54.703119   14913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:38:54.707141  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:38:54.707152  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:38:54.775317  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:38:54.775338  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:38:57.310952  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:38:57.322922  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:38:57.322983  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:38:57.357392  528764 cri.go:89] found id: ""
	I1217 20:38:57.357406  528764 logs.go:282] 0 containers: []
	W1217 20:38:57.357413  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:38:57.357420  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:38:57.357476  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:38:57.384349  528764 cri.go:89] found id: ""
	I1217 20:38:57.384363  528764 logs.go:282] 0 containers: []
	W1217 20:38:57.384373  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:38:57.384378  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:38:57.384434  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:38:57.412576  528764 cri.go:89] found id: ""
	I1217 20:38:57.412590  528764 logs.go:282] 0 containers: []
	W1217 20:38:57.412598  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:38:57.412603  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:38:57.412662  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:38:57.439190  528764 cri.go:89] found id: ""
	I1217 20:38:57.439205  528764 logs.go:282] 0 containers: []
	W1217 20:38:57.439212  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:38:57.439217  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:38:57.439305  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:38:57.466239  528764 cri.go:89] found id: ""
	I1217 20:38:57.466253  528764 logs.go:282] 0 containers: []
	W1217 20:38:57.466262  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:38:57.466267  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:38:57.466324  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:38:57.491495  528764 cri.go:89] found id: ""
	I1217 20:38:57.491508  528764 logs.go:282] 0 containers: []
	W1217 20:38:57.491516  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:38:57.491522  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:38:57.491597  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:38:57.517009  528764 cri.go:89] found id: ""
	I1217 20:38:57.517023  528764 logs.go:282] 0 containers: []
	W1217 20:38:57.517030  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:38:57.517038  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:38:57.517048  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:38:57.582648  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:38:57.582669  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:38:57.597231  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:38:57.597249  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:38:57.663163  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:38:57.654987   15018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:57.655397   15018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:57.656981   15018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:57.657561   15018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:57.659204   15018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:38:57.654987   15018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:57.655397   15018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:57.656981   15018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:57.657561   15018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:38:57.659204   15018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:38:57.663174  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:38:57.663186  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:38:57.735126  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:38:57.735151  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:39:00.265877  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:39:00.292750  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:39:00.292841  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:39:00.342493  528764 cri.go:89] found id: ""
	I1217 20:39:00.342529  528764 logs.go:282] 0 containers: []
	W1217 20:39:00.342553  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:39:00.342560  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:39:00.342673  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:39:00.389833  528764 cri.go:89] found id: ""
	I1217 20:39:00.389858  528764 logs.go:282] 0 containers: []
	W1217 20:39:00.389866  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:39:00.389871  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:39:00.389943  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:39:00.427417  528764 cri.go:89] found id: ""
	I1217 20:39:00.427442  528764 logs.go:282] 0 containers: []
	W1217 20:39:00.427450  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:39:00.427455  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:39:00.427525  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:39:00.455698  528764 cri.go:89] found id: ""
	I1217 20:39:00.455712  528764 logs.go:282] 0 containers: []
	W1217 20:39:00.455720  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:39:00.455726  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:39:00.455784  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:39:00.487535  528764 cri.go:89] found id: ""
	I1217 20:39:00.487551  528764 logs.go:282] 0 containers: []
	W1217 20:39:00.487558  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:39:00.487576  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:39:00.487666  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:39:00.514228  528764 cri.go:89] found id: ""
	I1217 20:39:00.514243  528764 logs.go:282] 0 containers: []
	W1217 20:39:00.514251  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:39:00.514256  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:39:00.514315  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:39:00.540536  528764 cri.go:89] found id: ""
	I1217 20:39:00.540561  528764 logs.go:282] 0 containers: []
	W1217 20:39:00.540569  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:39:00.540576  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:39:00.540586  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:39:00.607064  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:39:00.607084  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:39:00.639882  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:39:00.639899  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:39:00.705607  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:39:00.705629  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:39:00.721491  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:39:00.721506  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:39:00.784593  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:39:00.776120   15140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:00.776725   15140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:00.778453   15140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:00.778972   15140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:00.780702   15140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:39:00.776120   15140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:00.776725   15140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:00.778453   15140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:00.778972   15140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:00.780702   15140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:39:03.284822  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:39:03.295036  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:39:03.295097  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:39:03.333750  528764 cri.go:89] found id: ""
	I1217 20:39:03.333778  528764 logs.go:282] 0 containers: []
	W1217 20:39:03.333786  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:39:03.333792  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:39:03.333861  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:39:03.363983  528764 cri.go:89] found id: ""
	I1217 20:39:03.363997  528764 logs.go:282] 0 containers: []
	W1217 20:39:03.364004  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:39:03.364024  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:39:03.364082  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:39:03.392963  528764 cri.go:89] found id: ""
	I1217 20:39:03.392977  528764 logs.go:282] 0 containers: []
	W1217 20:39:03.392984  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:39:03.392989  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:39:03.393044  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:39:03.419023  528764 cri.go:89] found id: ""
	I1217 20:39:03.419039  528764 logs.go:282] 0 containers: []
	W1217 20:39:03.419046  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:39:03.419052  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:39:03.419108  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:39:03.444813  528764 cri.go:89] found id: ""
	I1217 20:39:03.444826  528764 logs.go:282] 0 containers: []
	W1217 20:39:03.444833  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:39:03.444838  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:39:03.444895  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:39:03.468964  528764 cri.go:89] found id: ""
	I1217 20:39:03.468978  528764 logs.go:282] 0 containers: []
	W1217 20:39:03.468986  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:39:03.468996  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:39:03.469053  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:39:03.494050  528764 cri.go:89] found id: ""
	I1217 20:39:03.494063  528764 logs.go:282] 0 containers: []
	W1217 20:39:03.494071  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:39:03.494078  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:39:03.494087  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:39:03.559830  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:39:03.559849  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:39:03.575390  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:39:03.575407  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:39:03.642132  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:39:03.634093   15231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:03.634724   15231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:03.636305   15231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:03.636854   15231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:03.638302   15231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:39:03.634093   15231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:03.634724   15231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:03.636305   15231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:03.636854   15231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:03.638302   15231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:39:03.642142  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:39:03.642153  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:39:03.710317  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:39:03.710339  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:39:06.242034  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:39:06.252695  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:39:06.252759  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:39:06.278446  528764 cri.go:89] found id: ""
	I1217 20:39:06.278460  528764 logs.go:282] 0 containers: []
	W1217 20:39:06.278467  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:39:06.278477  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:39:06.278573  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:39:06.304597  528764 cri.go:89] found id: ""
	I1217 20:39:06.304612  528764 logs.go:282] 0 containers: []
	W1217 20:39:06.304620  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:39:06.304630  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:39:06.304702  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:39:06.345678  528764 cri.go:89] found id: ""
	I1217 20:39:06.345693  528764 logs.go:282] 0 containers: []
	W1217 20:39:06.345700  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:39:06.345706  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:39:06.345764  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:39:06.381455  528764 cri.go:89] found id: ""
	I1217 20:39:06.381469  528764 logs.go:282] 0 containers: []
	W1217 20:39:06.381476  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:39:06.381482  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:39:06.381542  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:39:06.410677  528764 cri.go:89] found id: ""
	I1217 20:39:06.410691  528764 logs.go:282] 0 containers: []
	W1217 20:39:06.410698  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:39:06.410704  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:39:06.410774  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:39:06.436535  528764 cri.go:89] found id: ""
	I1217 20:39:06.436549  528764 logs.go:282] 0 containers: []
	W1217 20:39:06.436556  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:39:06.436564  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:39:06.436621  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:39:06.467306  528764 cri.go:89] found id: ""
	I1217 20:39:06.467320  528764 logs.go:282] 0 containers: []
	W1217 20:39:06.467327  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:39:06.467335  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:39:06.467345  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:39:06.533557  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:39:06.533577  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:39:06.548883  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:39:06.548901  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:39:06.613032  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:39:06.604590   15336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:06.605314   15336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:06.606990   15336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:06.607539   15336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:06.609092   15336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:39:06.604590   15336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:06.605314   15336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:06.606990   15336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:06.607539   15336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:06.609092   15336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
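Every "describe nodes" attempt in this stretch fails identically: kubectl cannot reach the apiserver because nothing is listening on [::1]:8441 (this test cluster's apiserver port; the minikube default is 8443). Since every container listing in the same iteration also comes back empty, the refusal is expected rather than a networking problem. A minimal standalone Go sketch of the same reachability probe (hypothetical, not part of the test suite):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // With no kube-apiserver bound to 8441, this fails with
        // "connection refused", matching the kubectl stderr above.
        conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("port 8441 is accepting connections")
    }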
	I1217 20:39:06.613048  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:39:06.613068  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:39:06.682237  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:39:06.682258  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
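The block above is one iteration of minikube's wait-for-apiserver loop: it first looks for a running kube-apiserver process, and when pgrep finds nothing it asks the CRI runtime for each expected control-plane container by name. crictl's --quiet flag prints only container IDs, so empty output is what the log records as found id: "". The -a flag matters here: it includes exited containers, so an empty result means the kubelet never created the pod at all, not that it crashed. The closing container-status command also carries a double fallback: `which crictl || echo crictl` keeps the command well-formed even if which finds nothing, and || sudo docker ps -a covers Docker-based runtimes. A hypothetical Go reconstruction of the loop (not minikube's actual source, and run directly on the node rather than through ssh_runner):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // Component names probed for, in the order seen in the log above.
    var components = []string{
        "kube-apiserver", "etcd", "coredns", "kube-scheduler",
        "kube-proxy", "kube-controller-manager", "kindnet",
    }

    func main() {
        for i := 0; i < 100; i++ {
            // Equivalent of: sudo pgrep -xnf kube-apiserver.*minikube.*
            if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
                fmt.Println("kube-apiserver process is up")
                return
            }
            // No process yet: ask the CRI runtime for each expected container.
            for _, name := range components {
                out, _ := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
                if len(out) == 0 {
                    fmt.Printf("no container matching %q\n", name)
                }
            }
            time.Sleep(3 * time.Second) // the log's iterations are ~3s apart
        }
        fmt.Println("gave up waiting for kube-apiserver")
    }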
	I1217 20:39:09.211382  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:39:09.221300  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:39:09.221359  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:39:09.246764  528764 cri.go:89] found id: ""
	I1217 20:39:09.246778  528764 logs.go:282] 0 containers: []
	W1217 20:39:09.246785  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:39:09.246790  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:39:09.246867  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:39:09.271248  528764 cri.go:89] found id: ""
	I1217 20:39:09.271261  528764 logs.go:282] 0 containers: []
	W1217 20:39:09.271268  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:39:09.271273  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:39:09.271343  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:39:09.296093  528764 cri.go:89] found id: ""
	I1217 20:39:09.296107  528764 logs.go:282] 0 containers: []
	W1217 20:39:09.296114  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:39:09.296120  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:39:09.296175  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:39:09.325215  528764 cri.go:89] found id: ""
	I1217 20:39:09.325230  528764 logs.go:282] 0 containers: []
	W1217 20:39:09.325236  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:39:09.325241  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:39:09.325304  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:39:09.352141  528764 cri.go:89] found id: ""
	I1217 20:39:09.352155  528764 logs.go:282] 0 containers: []
	W1217 20:39:09.352162  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:39:09.352167  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:39:09.352237  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:39:09.383006  528764 cri.go:89] found id: ""
	I1217 20:39:09.383021  528764 logs.go:282] 0 containers: []
	W1217 20:39:09.383028  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:39:09.383034  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:39:09.383113  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:39:09.414504  528764 cri.go:89] found id: ""
	I1217 20:39:09.414518  528764 logs.go:282] 0 containers: []
	W1217 20:39:09.414526  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:39:09.414534  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:39:09.414566  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:39:09.483870  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:39:09.483889  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:39:09.498851  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:39:09.498867  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:39:09.569431  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:39:09.561559   15441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:09.562122   15441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:09.563640   15441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:09.564216   15441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:09.565635   15441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:39:09.561559   15441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:09.562122   15441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:09.563640   15441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:09.564216   15441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:09.565635   15441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:39:09.569442  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:39:09.569452  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:39:09.636946  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:39:09.636966  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:39:12.165906  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:39:12.176117  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:39:12.176184  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:39:12.202030  528764 cri.go:89] found id: ""
	I1217 20:39:12.202043  528764 logs.go:282] 0 containers: []
	W1217 20:39:12.202051  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:39:12.202056  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:39:12.202111  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:39:12.230473  528764 cri.go:89] found id: ""
	I1217 20:39:12.230487  528764 logs.go:282] 0 containers: []
	W1217 20:39:12.230495  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:39:12.230500  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:39:12.230559  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:39:12.256663  528764 cri.go:89] found id: ""
	I1217 20:39:12.256677  528764 logs.go:282] 0 containers: []
	W1217 20:39:12.256685  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:39:12.256690  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:39:12.256747  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:39:12.284083  528764 cri.go:89] found id: ""
	I1217 20:39:12.284096  528764 logs.go:282] 0 containers: []
	W1217 20:39:12.284104  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:39:12.284109  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:39:12.284168  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:39:12.309047  528764 cri.go:89] found id: ""
	I1217 20:39:12.309062  528764 logs.go:282] 0 containers: []
	W1217 20:39:12.309070  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:39:12.309075  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:39:12.309134  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:39:12.351942  528764 cri.go:89] found id: ""
	I1217 20:39:12.351957  528764 logs.go:282] 0 containers: []
	W1217 20:39:12.351969  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:39:12.351975  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:39:12.352034  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:39:12.390734  528764 cri.go:89] found id: ""
	I1217 20:39:12.390765  528764 logs.go:282] 0 containers: []
	W1217 20:39:12.390773  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:39:12.390782  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:39:12.390793  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:39:12.456083  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:39:12.456103  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:39:12.471218  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:39:12.471239  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:39:12.538690  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:39:12.527207   15546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:12.527702   15546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:12.529370   15546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:12.529843   15546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:12.531751   15546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:39:12.527207   15546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:12.527702   15546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:12.529370   15546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:12.529843   15546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:12.531751   15546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:39:12.538707  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:39:12.538718  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:39:12.605751  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:39:12.605772  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:39:15.135835  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:39:15.146221  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:39:15.146280  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:39:15.176272  528764 cri.go:89] found id: ""
	I1217 20:39:15.176286  528764 logs.go:282] 0 containers: []
	W1217 20:39:15.176294  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:39:15.176301  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:39:15.176357  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:39:15.206452  528764 cri.go:89] found id: ""
	I1217 20:39:15.206466  528764 logs.go:282] 0 containers: []
	W1217 20:39:15.206474  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:39:15.206479  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:39:15.206548  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:39:15.231899  528764 cri.go:89] found id: ""
	I1217 20:39:15.231914  528764 logs.go:282] 0 containers: []
	W1217 20:39:15.231921  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:39:15.231927  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:39:15.231996  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:39:15.257093  528764 cri.go:89] found id: ""
	I1217 20:39:15.257106  528764 logs.go:282] 0 containers: []
	W1217 20:39:15.257113  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:39:15.257119  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:39:15.257174  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:39:15.281692  528764 cri.go:89] found id: ""
	I1217 20:39:15.281706  528764 logs.go:282] 0 containers: []
	W1217 20:39:15.281714  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:39:15.281719  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:39:15.281777  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:39:15.310093  528764 cri.go:89] found id: ""
	I1217 20:39:15.310107  528764 logs.go:282] 0 containers: []
	W1217 20:39:15.310114  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:39:15.310119  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:39:15.310193  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:39:15.349800  528764 cri.go:89] found id: ""
	I1217 20:39:15.349813  528764 logs.go:282] 0 containers: []
	W1217 20:39:15.349830  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:39:15.349839  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:39:15.349850  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:39:15.426883  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:39:15.426904  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:39:15.442044  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:39:15.442059  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:39:15.512531  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:39:15.503665   15651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:15.504418   15651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:15.506067   15651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:15.506537   15651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:15.508077   15651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:39:15.503665   15651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:15.504418   15651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:15.506067   15651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:15.506537   15651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:15.508077   15651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:39:15.512542  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:39:15.512554  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:39:15.587396  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:39:15.587422  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:39:18.121184  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:39:18.131563  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:39:18.131644  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:39:18.157091  528764 cri.go:89] found id: ""
	I1217 20:39:18.157105  528764 logs.go:282] 0 containers: []
	W1217 20:39:18.157113  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:39:18.157118  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:39:18.157177  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:39:18.183414  528764 cri.go:89] found id: ""
	I1217 20:39:18.183428  528764 logs.go:282] 0 containers: []
	W1217 20:39:18.183452  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:39:18.183457  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:39:18.183523  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:39:18.210558  528764 cri.go:89] found id: ""
	I1217 20:39:18.210586  528764 logs.go:282] 0 containers: []
	W1217 20:39:18.210595  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:39:18.210600  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:39:18.210667  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:39:18.236623  528764 cri.go:89] found id: ""
	I1217 20:39:18.236653  528764 logs.go:282] 0 containers: []
	W1217 20:39:18.236661  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:39:18.236666  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:39:18.236730  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:39:18.263889  528764 cri.go:89] found id: ""
	I1217 20:39:18.263903  528764 logs.go:282] 0 containers: []
	W1217 20:39:18.263911  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:39:18.263916  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:39:18.263977  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:39:18.289661  528764 cri.go:89] found id: ""
	I1217 20:39:18.289675  528764 logs.go:282] 0 containers: []
	W1217 20:39:18.289683  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:39:18.289688  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:39:18.289743  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:39:18.314115  528764 cri.go:89] found id: ""
	I1217 20:39:18.314129  528764 logs.go:282] 0 containers: []
	W1217 20:39:18.314136  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:39:18.314143  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:39:18.314165  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:39:18.382890  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:39:18.382909  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:39:18.425251  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:39:18.425268  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:39:18.493317  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:39:18.493336  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:39:18.509454  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:39:18.509470  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:39:18.571731  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:39:18.563250   15771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:18.563867   15771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:18.564819   15771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:18.566275   15771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:18.566732   15771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:39:18.563250   15771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:18.563867   15771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:18.564819   15771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:18.566275   15771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:18.566732   15771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
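Note that the order of the "Gathering logs for ..." steps varies between iterations: kubelet led at 20:39:15, CRI-O leads here at 20:39:18, and later iterations below start with container status (20:39:24) or describe nodes (20:39:27). That pattern is consistent with the gatherers being held in a Go map, since Go deliberately randomizes map iteration order on every traversal. A self-contained sketch (the gatherer names come from the log; the map itself is a hypothetical stand-in):

    package main

    import "fmt"

    func main() {
        gatherers := map[string]string{
            "kubelet":          "journalctl -u kubelet -n 400",
            "dmesg":            "dmesg -PH -L=never --level warn,err,crit,alert,emerg",
            "describe nodes":   "kubectl describe nodes",
            "CRI-O":            "journalctl -u crio -n 400",
            "container status": "crictl ps -a",
        }
        // Repeated runs, like repeated poll iterations above, visit the
        // gatherers in different orders because map iteration is randomized.
        for name := range gatherers {
            fmt.Println("Gathering logs for", name, "...")
        }
    }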
	I1217 20:39:21.073445  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:39:21.083815  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:39:21.083874  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:39:21.113281  528764 cri.go:89] found id: ""
	I1217 20:39:21.113295  528764 logs.go:282] 0 containers: []
	W1217 20:39:21.113302  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:39:21.113307  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:39:21.113365  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:39:21.142024  528764 cri.go:89] found id: ""
	I1217 20:39:21.142039  528764 logs.go:282] 0 containers: []
	W1217 20:39:21.142046  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:39:21.142059  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:39:21.142123  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:39:21.170658  528764 cri.go:89] found id: ""
	I1217 20:39:21.170678  528764 logs.go:282] 0 containers: []
	W1217 20:39:21.170686  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:39:21.170691  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:39:21.170756  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:39:21.196194  528764 cri.go:89] found id: ""
	I1217 20:39:21.196207  528764 logs.go:282] 0 containers: []
	W1217 20:39:21.196214  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:39:21.196220  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:39:21.196277  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:39:21.222255  528764 cri.go:89] found id: ""
	I1217 20:39:21.222269  528764 logs.go:282] 0 containers: []
	W1217 20:39:21.222276  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:39:21.222282  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:39:21.222355  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:39:21.247912  528764 cri.go:89] found id: ""
	I1217 20:39:21.247926  528764 logs.go:282] 0 containers: []
	W1217 20:39:21.247933  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:39:21.247939  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:39:21.247996  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:39:21.278136  528764 cri.go:89] found id: ""
	I1217 20:39:21.278151  528764 logs.go:282] 0 containers: []
	W1217 20:39:21.278158  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:39:21.278175  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:39:21.278187  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:39:21.346881  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:39:21.346899  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:39:21.363101  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:39:21.363117  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:39:21.431000  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:39:21.421965   15864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:21.422735   15864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:21.424535   15864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:21.425238   15864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:21.426896   15864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:39:21.421965   15864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:21.422735   15864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:21.424535   15864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:21.425238   15864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:21.426896   15864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:39:21.431011  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:39:21.431024  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:39:21.499494  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:39:21.499512  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:39:24.028859  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:39:24.039467  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:39:24.039528  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:39:24.065108  528764 cri.go:89] found id: ""
	I1217 20:39:24.065122  528764 logs.go:282] 0 containers: []
	W1217 20:39:24.065130  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:39:24.065135  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:39:24.065193  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:39:24.090624  528764 cri.go:89] found id: ""
	I1217 20:39:24.090638  528764 logs.go:282] 0 containers: []
	W1217 20:39:24.090647  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:39:24.090652  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:39:24.090710  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:39:24.116315  528764 cri.go:89] found id: ""
	I1217 20:39:24.116331  528764 logs.go:282] 0 containers: []
	W1217 20:39:24.116339  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:39:24.116345  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:39:24.116414  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:39:24.141792  528764 cri.go:89] found id: ""
	I1217 20:39:24.141806  528764 logs.go:282] 0 containers: []
	W1217 20:39:24.141813  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:39:24.141818  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:39:24.141877  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:39:24.170297  528764 cri.go:89] found id: ""
	I1217 20:39:24.170310  528764 logs.go:282] 0 containers: []
	W1217 20:39:24.170318  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:39:24.170324  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:39:24.170378  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:39:24.199383  528764 cri.go:89] found id: ""
	I1217 20:39:24.199397  528764 logs.go:282] 0 containers: []
	W1217 20:39:24.199404  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:39:24.199411  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:39:24.199477  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:39:24.224443  528764 cri.go:89] found id: ""
	I1217 20:39:24.224457  528764 logs.go:282] 0 containers: []
	W1217 20:39:24.224464  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:39:24.224471  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:39:24.224496  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:39:24.253379  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:39:24.253396  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:39:24.322404  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:39:24.322423  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:39:24.340551  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:39:24.340569  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:39:24.409290  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:39:24.400697   15977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:24.401399   15977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:24.402586   15977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:24.403250   15977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:24.404963   15977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:39:24.400697   15977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:24.401399   15977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:24.402586   15977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:24.403250   15977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:24.404963   15977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:39:24.409305  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:39:24.409316  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:39:26.976820  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:39:26.986804  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:39:26.986885  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:39:27.015438  528764 cri.go:89] found id: ""
	I1217 20:39:27.015453  528764 logs.go:282] 0 containers: []
	W1217 20:39:27.015460  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:39:27.015466  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:39:27.015545  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:39:27.041591  528764 cri.go:89] found id: ""
	I1217 20:39:27.041605  528764 logs.go:282] 0 containers: []
	W1217 20:39:27.041613  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:39:27.041619  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:39:27.041680  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:39:27.066798  528764 cri.go:89] found id: ""
	I1217 20:39:27.066812  528764 logs.go:282] 0 containers: []
	W1217 20:39:27.066819  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:39:27.066851  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:39:27.066908  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:39:27.091716  528764 cri.go:89] found id: ""
	I1217 20:39:27.091730  528764 logs.go:282] 0 containers: []
	W1217 20:39:27.091737  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:39:27.091743  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:39:27.091797  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:39:27.116523  528764 cri.go:89] found id: ""
	I1217 20:39:27.116536  528764 logs.go:282] 0 containers: []
	W1217 20:39:27.116544  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:39:27.116550  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:39:27.116612  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:39:27.140982  528764 cri.go:89] found id: ""
	I1217 20:39:27.140996  528764 logs.go:282] 0 containers: []
	W1217 20:39:27.141004  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:39:27.141009  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:39:27.141064  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:39:27.170754  528764 cri.go:89] found id: ""
	I1217 20:39:27.170769  528764 logs.go:282] 0 containers: []
	W1217 20:39:27.170777  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:39:27.170784  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:39:27.170805  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:39:27.234403  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:39:27.226295   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:27.226681   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:27.228247   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:27.228832   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:27.230386   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:39:27.226295   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:27.226681   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:27.228247   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:27.228832   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:27.230386   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:39:27.234413  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:39:27.234463  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:39:27.306551  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:39:27.306570  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:39:27.342575  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:39:27.342597  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:39:27.416305  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:39:27.416325  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
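The dmesg step in each iteration narrows the kernel ring buffer to messages at warning level or worse: with util-linux dmesg, -P disables the pager, -H prints human-readable timestamps, -L=never suppresses color, and --level keeps only the listed severities. A sketch wrapping the same pipeline from Go, much as ssh_runner shells out through bash (assumes util-linux dmesg on the node):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Same pipeline as the log's dmesg step: warnings and worse, last 400 lines.
        cmd := exec.Command("/bin/bash", "-c",
            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
        out, err := cmd.CombinedOutput()
        if err != nil {
            fmt.Println("dmesg failed:", err)
            return
        }
        fmt.Print(string(out))
    }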
	I1217 20:39:29.931568  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:39:29.941696  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:39:29.941790  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:39:29.970561  528764 cri.go:89] found id: ""
	I1217 20:39:29.970576  528764 logs.go:282] 0 containers: []
	W1217 20:39:29.970583  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:39:29.970588  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:39:29.970644  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:39:29.995538  528764 cri.go:89] found id: ""
	I1217 20:39:29.995551  528764 logs.go:282] 0 containers: []
	W1217 20:39:29.995559  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:39:29.995564  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:39:29.995645  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:39:30.047472  528764 cri.go:89] found id: ""
	I1217 20:39:30.047487  528764 logs.go:282] 0 containers: []
	W1217 20:39:30.047496  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:39:30.047501  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:39:30.047568  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:39:30.077580  528764 cri.go:89] found id: ""
	I1217 20:39:30.077595  528764 logs.go:282] 0 containers: []
	W1217 20:39:30.077603  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:39:30.077609  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:39:30.077686  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:39:30.111544  528764 cri.go:89] found id: ""
	I1217 20:39:30.111574  528764 logs.go:282] 0 containers: []
	W1217 20:39:30.111618  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:39:30.111624  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:39:30.111705  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:39:30.139478  528764 cri.go:89] found id: ""
	I1217 20:39:30.139504  528764 logs.go:282] 0 containers: []
	W1217 20:39:30.139513  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:39:30.139518  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:39:30.139611  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:39:30.169107  528764 cri.go:89] found id: ""
	I1217 20:39:30.169121  528764 logs.go:282] 0 containers: []
	W1217 20:39:30.169128  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:39:30.169136  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:39:30.169146  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:39:30.234963  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:39:30.234982  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:39:30.250550  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:39:30.250577  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:39:30.320870  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:39:30.310827   16172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:30.311705   16172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:30.313455   16172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:30.314025   16172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:30.315795   16172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:39:30.310827   16172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:30.311705   16172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:30.313455   16172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:30.314025   16172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:30.315795   16172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:39:30.320884  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:39:30.320894  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:39:30.397776  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:39:30.397796  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
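
Every polling cycle above fails at the same point: nothing is listening on localhost:8441, so the describe-nodes probe gets connection refused before it can reach the apiserver. The probe can be reproduced by hand from inside the node; a minimal sketch, assuming shell access to the node and the binary and kubeconfig paths shown in the log:

    # Check whether anything is listening on the apiserver port from the log (8441)
    sudo ss -ltnp | grep 8441 || echo "no listener on 8441"

    # Re-run the exact probe minikube executes for "describe nodes"
    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig
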
	I1217 20:39:32.932751  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:39:32.942813  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:39:32.942885  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:39:32.968405  528764 cri.go:89] found id: ""
	I1217 20:39:32.968418  528764 logs.go:282] 0 containers: []
	W1217 20:39:32.968425  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:39:32.968431  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:39:32.968503  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:39:32.991973  528764 cri.go:89] found id: ""
	I1217 20:39:32.991987  528764 logs.go:282] 0 containers: []
	W1217 20:39:32.991994  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:39:32.992005  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:39:32.992063  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:39:33.019478  528764 cri.go:89] found id: ""
	I1217 20:39:33.019492  528764 logs.go:282] 0 containers: []
	W1217 20:39:33.019500  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:39:33.019505  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:39:33.019572  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:39:33.044942  528764 cri.go:89] found id: ""
	I1217 20:39:33.044958  528764 logs.go:282] 0 containers: []
	W1217 20:39:33.044965  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:39:33.044970  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:39:33.045028  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:39:33.072242  528764 cri.go:89] found id: ""
	I1217 20:39:33.072256  528764 logs.go:282] 0 containers: []
	W1217 20:39:33.072263  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:39:33.072268  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:39:33.072332  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:39:33.101598  528764 cri.go:89] found id: ""
	I1217 20:39:33.101611  528764 logs.go:282] 0 containers: []
	W1217 20:39:33.101619  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:39:33.101624  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:39:33.101677  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:39:33.127765  528764 cri.go:89] found id: ""
	I1217 20:39:33.127780  528764 logs.go:282] 0 containers: []
	W1217 20:39:33.127805  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:39:33.127813  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:39:33.127830  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:39:33.193505  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:39:33.193524  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:39:33.209404  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:39:33.209419  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:39:33.278213  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:39:33.269512   16277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:33.270341   16277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:33.272086   16277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:33.272605   16277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:33.274151   16277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 20:39:33.278224  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:39:33.278234  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:39:33.352890  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:39:33.352911  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
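
In the cycle above every crictl query comes back with found id: "", meaning CRI-O reports no containers at all for the control-plane names, not even exited ones. A hedged sketch of the same check done manually, using only the crictl flags already visible in the log, to distinguish "never created" from "created and crashed":

    # Quiet ID-only query, exactly as in the log; empty output means no container
    sudo crictl ps -a --quiet --name=kube-apiserver

    # Full listing (any state) for each control-plane component
    for c in kube-apiserver etcd kube-scheduler kube-controller-manager; do
        echo "== $c =="
        sudo crictl ps -a --name "$c"
    done
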
	I1217 20:39:35.892717  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:39:35.902865  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:39:35.902923  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:39:35.927963  528764 cri.go:89] found id: ""
	I1217 20:39:35.927977  528764 logs.go:282] 0 containers: []
	W1217 20:39:35.927985  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:39:35.927990  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:39:35.928047  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:39:35.953995  528764 cri.go:89] found id: ""
	I1217 20:39:35.954010  528764 logs.go:282] 0 containers: []
	W1217 20:39:35.954017  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:39:35.954022  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:39:35.954078  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:39:35.978944  528764 cri.go:89] found id: ""
	I1217 20:39:35.978958  528764 logs.go:282] 0 containers: []
	W1217 20:39:35.978965  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:39:35.978971  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:39:35.979027  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:39:36.009908  528764 cri.go:89] found id: ""
	I1217 20:39:36.009923  528764 logs.go:282] 0 containers: []
	W1217 20:39:36.009932  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:39:36.009938  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:39:36.010005  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:39:36.036093  528764 cri.go:89] found id: ""
	I1217 20:39:36.036106  528764 logs.go:282] 0 containers: []
	W1217 20:39:36.036114  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:39:36.036125  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:39:36.036189  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:39:36.064858  528764 cri.go:89] found id: ""
	I1217 20:39:36.064873  528764 logs.go:282] 0 containers: []
	W1217 20:39:36.064880  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:39:36.064888  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:39:36.064943  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:39:36.091213  528764 cri.go:89] found id: ""
	I1217 20:39:36.091228  528764 logs.go:282] 0 containers: []
	W1217 20:39:36.091236  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:39:36.091243  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:39:36.091265  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:39:36.123131  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:39:36.123147  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:39:36.192190  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:39:36.192209  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:39:36.207423  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:39:36.207441  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:39:36.274672  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:39:36.265622   16394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:36.266359   16394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:36.267351   16394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:36.268947   16394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:36.269621   16394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 20:39:36.274682  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:39:36.274693  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:39:38.848137  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:39:38.858186  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:39:38.858245  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:39:38.887476  528764 cri.go:89] found id: ""
	I1217 20:39:38.887491  528764 logs.go:282] 0 containers: []
	W1217 20:39:38.887498  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:39:38.887503  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:39:38.887559  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:39:38.913669  528764 cri.go:89] found id: ""
	I1217 20:39:38.913683  528764 logs.go:282] 0 containers: []
	W1217 20:39:38.913691  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:39:38.913696  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:39:38.913753  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:39:38.938922  528764 cri.go:89] found id: ""
	I1217 20:39:38.938937  528764 logs.go:282] 0 containers: []
	W1217 20:39:38.938945  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:39:38.938950  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:39:38.939010  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:39:38.964782  528764 cri.go:89] found id: ""
	I1217 20:39:38.964796  528764 logs.go:282] 0 containers: []
	W1217 20:39:38.964804  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:39:38.964809  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:39:38.964869  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:39:38.990990  528764 cri.go:89] found id: ""
	I1217 20:39:38.991004  528764 logs.go:282] 0 containers: []
	W1217 20:39:38.991012  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:39:38.991017  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:39:38.991087  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:39:39.019624  528764 cri.go:89] found id: ""
	I1217 20:39:39.019638  528764 logs.go:282] 0 containers: []
	W1217 20:39:39.019645  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:39:39.019651  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:39:39.019712  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:39:39.049943  528764 cri.go:89] found id: ""
	I1217 20:39:39.049957  528764 logs.go:282] 0 containers: []
	W1217 20:39:39.049964  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:39:39.049971  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:39:39.049982  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:39:39.114679  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:39:39.114699  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:39:39.129526  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:39:39.129544  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:39:39.192131  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:39:39.184273   16490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:39.185000   16490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:39.186617   16490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:39.186938   16490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:39.188434   16490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 20:39:39.192141  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:39:39.192151  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:39:39.262829  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:39:39.262849  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:39:41.796129  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:39:41.805988  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:39:41.806050  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:39:41.830659  528764 cri.go:89] found id: ""
	I1217 20:39:41.830688  528764 logs.go:282] 0 containers: []
	W1217 20:39:41.830696  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:39:41.830702  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:39:41.830772  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:39:41.855846  528764 cri.go:89] found id: ""
	I1217 20:39:41.855861  528764 logs.go:282] 0 containers: []
	W1217 20:39:41.855868  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:39:41.855874  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:39:41.855937  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:39:41.880126  528764 cri.go:89] found id: ""
	I1217 20:39:41.880139  528764 logs.go:282] 0 containers: []
	W1217 20:39:41.880147  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:39:41.880151  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:39:41.880205  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:39:41.909006  528764 cri.go:89] found id: ""
	I1217 20:39:41.909020  528764 logs.go:282] 0 containers: []
	W1217 20:39:41.909027  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:39:41.909032  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:39:41.909088  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:39:41.938559  528764 cri.go:89] found id: ""
	I1217 20:39:41.938573  528764 logs.go:282] 0 containers: []
	W1217 20:39:41.938580  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:39:41.938585  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:39:41.938646  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:39:41.966291  528764 cri.go:89] found id: ""
	I1217 20:39:41.966305  528764 logs.go:282] 0 containers: []
	W1217 20:39:41.966312  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:39:41.966317  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:39:41.966380  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:39:41.991150  528764 cri.go:89] found id: ""
	I1217 20:39:41.991164  528764 logs.go:282] 0 containers: []
	W1217 20:39:41.991172  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:39:41.991180  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:39:41.991190  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:39:42.024918  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:39:42.024936  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:39:42.094047  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:39:42.094069  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:39:42.113717  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:39:42.113737  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:39:42.191163  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:39:42.180141   16604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:42.180682   16604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:42.183783   16604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:42.184295   16604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:42.186218   16604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 20:39:42.191176  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:39:42.191195  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:39:44.772767  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:39:44.783138  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:39:44.783204  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:39:44.811282  528764 cri.go:89] found id: ""
	I1217 20:39:44.811296  528764 logs.go:282] 0 containers: []
	W1217 20:39:44.811304  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:39:44.811309  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:39:44.811369  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:39:44.838690  528764 cri.go:89] found id: ""
	I1217 20:39:44.838704  528764 logs.go:282] 0 containers: []
	W1217 20:39:44.838711  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:39:44.838717  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:39:44.838776  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:39:44.866668  528764 cri.go:89] found id: ""
	I1217 20:39:44.866683  528764 logs.go:282] 0 containers: []
	W1217 20:39:44.866690  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:39:44.866696  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:39:44.866751  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:39:44.892383  528764 cri.go:89] found id: ""
	I1217 20:39:44.892397  528764 logs.go:282] 0 containers: []
	W1217 20:39:44.892405  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:39:44.892410  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:39:44.892468  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:39:44.921797  528764 cri.go:89] found id: ""
	I1217 20:39:44.921812  528764 logs.go:282] 0 containers: []
	W1217 20:39:44.921819  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:39:44.921825  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:39:44.921885  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:39:44.947362  528764 cri.go:89] found id: ""
	I1217 20:39:44.947376  528764 logs.go:282] 0 containers: []
	W1217 20:39:44.947384  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:39:44.947389  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:39:44.947446  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:39:44.974284  528764 cri.go:89] found id: ""
	I1217 20:39:44.974297  528764 logs.go:282] 0 containers: []
	W1217 20:39:44.974305  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:39:44.974312  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:39:44.974323  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:39:45.077487  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:39:45.067021   16692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:45.068124   16692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:45.068971   16692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:45.071090   16692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:45.072380   16692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 20:39:45.077499  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:39:45.077511  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:39:45.185472  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:39:45.185499  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:39:45.244734  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:39:45.244753  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:39:45.320383  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:39:45.320403  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
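
With no containers to inspect, the only evidence left is in the journals that each cycle tails. A sketch of capturing the same windows once to files instead of re-reading them every few seconds (the -n 400 window matches the log; --no-pager is a standard journalctl flag, added here so output can be redirected cleanly):

    sudo journalctl -u kubelet -n 400 --no-pager > kubelet.log
    sudo journalctl -u crio -n 400 --no-pager > crio.log
    sudo dmesg --level warn,err,crit,alert,emerg | tail -n 400 > dmesg.log
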
	I1217 20:39:47.839254  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:39:47.849450  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:39:47.849509  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:39:47.878517  528764 cri.go:89] found id: ""
	I1217 20:39:47.878531  528764 logs.go:282] 0 containers: []
	W1217 20:39:47.878539  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:39:47.878554  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:39:47.878612  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:39:47.904739  528764 cri.go:89] found id: ""
	I1217 20:39:47.904754  528764 logs.go:282] 0 containers: []
	W1217 20:39:47.904762  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:39:47.904767  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:39:47.904823  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:39:47.929572  528764 cri.go:89] found id: ""
	I1217 20:39:47.929586  528764 logs.go:282] 0 containers: []
	W1217 20:39:47.929593  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:39:47.929599  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:39:47.929658  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:39:47.958617  528764 cri.go:89] found id: ""
	I1217 20:39:47.958631  528764 logs.go:282] 0 containers: []
	W1217 20:39:47.958639  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:39:47.958644  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:39:47.958701  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:39:47.984420  528764 cri.go:89] found id: ""
	I1217 20:39:47.984434  528764 logs.go:282] 0 containers: []
	W1217 20:39:47.984441  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:39:47.984447  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:39:47.984504  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:39:48.013373  528764 cri.go:89] found id: ""
	I1217 20:39:48.013389  528764 logs.go:282] 0 containers: []
	W1217 20:39:48.013396  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:39:48.013402  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:39:48.013461  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:39:48.040700  528764 cri.go:89] found id: ""
	I1217 20:39:48.040713  528764 logs.go:282] 0 containers: []
	W1217 20:39:48.040720  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:39:48.040728  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:39:48.040740  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:39:48.112503  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:39:48.112522  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:39:48.148498  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:39:48.148514  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:39:48.215575  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:39:48.215644  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:39:48.230769  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:39:48.230785  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:39:48.305622  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:39:48.297446   16820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:48.298244   16820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:48.299754   16820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:48.300303   16820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:48.301821   16820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
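
The cycles repeat at roughly three-second intervals, each opening with the same pgrep probe for a live apiserver process. The equivalent manual wait loop looks like the sketch below; the pattern is copied from the log, while the 60-iteration cap is an arbitrary illustration, not a minikube default:

    # Poll for the apiserver process the way each cycle above does
    for i in $(seq 1 60); do
        if sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
            echo "apiserver process found"; break
        fi
        sleep 3
    done
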
	I1217 20:39:50.807281  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:39:50.819012  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:39:50.819075  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:39:50.845131  528764 cri.go:89] found id: ""
	I1217 20:39:50.845145  528764 logs.go:282] 0 containers: []
	W1217 20:39:50.845153  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:39:50.845158  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:39:50.845215  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:39:50.878758  528764 cri.go:89] found id: ""
	I1217 20:39:50.878771  528764 logs.go:282] 0 containers: []
	W1217 20:39:50.878778  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:39:50.878783  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:39:50.878851  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:39:50.905139  528764 cri.go:89] found id: ""
	I1217 20:39:50.905154  528764 logs.go:282] 0 containers: []
	W1217 20:39:50.905161  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:39:50.905167  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:39:50.905234  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:39:50.930885  528764 cri.go:89] found id: ""
	I1217 20:39:50.930898  528764 logs.go:282] 0 containers: []
	W1217 20:39:50.930923  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:39:50.930928  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:39:50.931004  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:39:50.961249  528764 cri.go:89] found id: ""
	I1217 20:39:50.961264  528764 logs.go:282] 0 containers: []
	W1217 20:39:50.961271  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:39:50.961281  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:39:50.961339  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:39:50.990268  528764 cri.go:89] found id: ""
	I1217 20:39:50.990283  528764 logs.go:282] 0 containers: []
	W1217 20:39:50.990290  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:39:50.990305  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:39:50.990368  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:39:51.022220  528764 cri.go:89] found id: ""
	I1217 20:39:51.022235  528764 logs.go:282] 0 containers: []
	W1217 20:39:51.022253  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:39:51.022260  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:39:51.022272  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:39:51.037279  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:39:51.037301  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:39:51.104091  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:39:51.095357   16908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:51.096330   16908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:51.098113   16908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:51.098463   16908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:51.100108   16908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 20:39:51.104101  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:39:51.104112  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:39:51.170651  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:39:51.170674  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:39:51.200399  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:39:51.200421  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:39:53.770767  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:39:53.780793  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:39:53.780851  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:39:53.809348  528764 cri.go:89] found id: ""
	I1217 20:39:53.809362  528764 logs.go:282] 0 containers: []
	W1217 20:39:53.809370  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:39:53.809375  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:39:53.809441  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:39:53.834689  528764 cri.go:89] found id: ""
	I1217 20:39:53.834703  528764 logs.go:282] 0 containers: []
	W1217 20:39:53.834710  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:39:53.834716  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:39:53.834772  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:39:53.861465  528764 cri.go:89] found id: ""
	I1217 20:39:53.861483  528764 logs.go:282] 0 containers: []
	W1217 20:39:53.861491  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:39:53.861498  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:39:53.861562  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:39:53.891732  528764 cri.go:89] found id: ""
	I1217 20:39:53.891747  528764 logs.go:282] 0 containers: []
	W1217 20:39:53.891754  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:39:53.891759  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:39:53.891817  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:39:53.917938  528764 cri.go:89] found id: ""
	I1217 20:39:53.917952  528764 logs.go:282] 0 containers: []
	W1217 20:39:53.917959  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:39:53.917964  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:39:53.918024  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:39:53.943397  528764 cri.go:89] found id: ""
	I1217 20:39:53.943412  528764 logs.go:282] 0 containers: []
	W1217 20:39:53.943420  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:39:53.943431  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:39:53.943500  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:39:53.970499  528764 cri.go:89] found id: ""
	I1217 20:39:53.970514  528764 logs.go:282] 0 containers: []
	W1217 20:39:53.970521  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:39:53.970529  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:39:53.970540  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:39:54.037615  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:39:54.028803   17010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:54.030066   17010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:54.030771   17010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:54.032113   17010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:54.032669   17010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 20:39:54.037625  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:39:54.037637  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:39:54.105683  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:39:54.105702  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:39:54.135408  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:39:54.135424  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:39:54.201915  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:39:54.201934  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:39:56.717571  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:39:56.727576  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:39:56.727663  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:39:56.752566  528764 cri.go:89] found id: ""
	I1217 20:39:56.752580  528764 logs.go:282] 0 containers: []
	W1217 20:39:56.752587  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:39:56.752593  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:39:56.752649  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:39:56.778100  528764 cri.go:89] found id: ""
	I1217 20:39:56.778114  528764 logs.go:282] 0 containers: []
	W1217 20:39:56.778123  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:39:56.778128  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:39:56.778188  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:39:56.810564  528764 cri.go:89] found id: ""
	I1217 20:39:56.810578  528764 logs.go:282] 0 containers: []
	W1217 20:39:56.810585  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:39:56.810590  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:39:56.810651  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:39:56.836110  528764 cri.go:89] found id: ""
	I1217 20:39:56.836123  528764 logs.go:282] 0 containers: []
	W1217 20:39:56.836130  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:39:56.836136  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:39:56.836192  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:39:56.860819  528764 cri.go:89] found id: ""
	I1217 20:39:56.860833  528764 logs.go:282] 0 containers: []
	W1217 20:39:56.860840  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:39:56.860845  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:39:56.860910  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:39:56.885378  528764 cri.go:89] found id: ""
	I1217 20:39:56.885392  528764 logs.go:282] 0 containers: []
	W1217 20:39:56.885400  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:39:56.885405  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:39:56.885464  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:39:56.910636  528764 cri.go:89] found id: ""
	I1217 20:39:56.910649  528764 logs.go:282] 0 containers: []
	W1217 20:39:56.910657  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:39:56.910664  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:39:56.910685  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:39:56.975973  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:39:56.975994  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:39:56.990897  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:39:56.990913  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:39:57.059420  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:39:57.050613   17117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:57.051527   17117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:57.053283   17117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:57.053833   17117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:57.055383   17117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:39:57.050613   17117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:57.051527   17117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:57.053283   17117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:57.053833   17117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:39:57.055383   17117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:39:57.059434  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:39:57.059444  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:39:57.127559  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:39:57.127588  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:39:59.660834  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:39:59.671347  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:39:59.671409  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:39:59.697317  528764 cri.go:89] found id: ""
	I1217 20:39:59.697331  528764 logs.go:282] 0 containers: []
	W1217 20:39:59.697338  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:39:59.697344  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:39:59.697400  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:39:59.721571  528764 cri.go:89] found id: ""
	I1217 20:39:59.721586  528764 logs.go:282] 0 containers: []
	W1217 20:39:59.721593  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:39:59.721601  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:39:59.721663  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:39:59.746819  528764 cri.go:89] found id: ""
	I1217 20:39:59.746835  528764 logs.go:282] 0 containers: []
	W1217 20:39:59.746843  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:39:59.746849  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:39:59.746909  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:39:59.773034  528764 cri.go:89] found id: ""
	I1217 20:39:59.773049  528764 logs.go:282] 0 containers: []
	W1217 20:39:59.773057  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:39:59.773062  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:39:59.773123  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:39:59.802418  528764 cri.go:89] found id: ""
	I1217 20:39:59.802441  528764 logs.go:282] 0 containers: []
	W1217 20:39:59.802449  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:39:59.802454  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:39:59.802524  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:39:59.831711  528764 cri.go:89] found id: ""
	I1217 20:39:59.831725  528764 logs.go:282] 0 containers: []
	W1217 20:39:59.831733  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:39:59.831739  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:39:59.831804  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:39:59.856953  528764 cri.go:89] found id: ""
	I1217 20:39:59.856967  528764 logs.go:282] 0 containers: []
	W1217 20:39:59.856975  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:39:59.856982  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:39:59.856995  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:39:59.884897  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:39:59.884914  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:39:59.949655  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:39:59.949677  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:39:59.964501  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:39:59.964517  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:40:00.094107  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:40:00.057057   17231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:00.058079   17231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:00.081049   17231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:00.085379   17231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:00.086028   17231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:40:00.057057   17231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:00.058079   17231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:00.081049   17231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:00.085379   17231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:00.086028   17231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:40:00.094120  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:40:00.094132  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:40:02.787739  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:40:02.797830  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:40:02.797894  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:40:02.834082  528764 cri.go:89] found id: ""
	I1217 20:40:02.834096  528764 logs.go:282] 0 containers: []
	W1217 20:40:02.834104  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:40:02.834109  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:40:02.834168  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:40:02.866743  528764 cri.go:89] found id: ""
	I1217 20:40:02.866756  528764 logs.go:282] 0 containers: []
	W1217 20:40:02.866763  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:40:02.866768  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:40:02.866837  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:40:02.895045  528764 cri.go:89] found id: ""
	I1217 20:40:02.895058  528764 logs.go:282] 0 containers: []
	W1217 20:40:02.895066  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:40:02.895071  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:40:02.895126  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:40:02.921557  528764 cri.go:89] found id: ""
	I1217 20:40:02.921570  528764 logs.go:282] 0 containers: []
	W1217 20:40:02.921580  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:40:02.921585  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:40:02.921641  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:40:02.952647  528764 cri.go:89] found id: ""
	I1217 20:40:02.952661  528764 logs.go:282] 0 containers: []
	W1217 20:40:02.952669  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:40:02.952675  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:40:02.952733  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:40:02.983298  528764 cri.go:89] found id: ""
	I1217 20:40:02.983312  528764 logs.go:282] 0 containers: []
	W1217 20:40:02.983319  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:40:02.983325  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:40:02.983389  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:40:03.010550  528764 cri.go:89] found id: ""
	I1217 20:40:03.010565  528764 logs.go:282] 0 containers: []
	W1217 20:40:03.010573  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:40:03.010581  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:40:03.010592  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:40:03.079310  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:40:03.079329  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:40:03.094479  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:40:03.094497  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:40:03.161221  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:40:03.151895   17325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:03.152622   17325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:03.154348   17325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:03.155044   17325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:03.156824   17325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:40:03.151895   17325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:03.152622   17325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:03.154348   17325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:03.155044   17325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:03.156824   17325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:40:03.161231  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:40:03.161242  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:40:03.227816  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:40:03.227835  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:40:05.757487  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:40:05.767711  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:40:05.767773  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:40:05.793946  528764 cri.go:89] found id: ""
	I1217 20:40:05.793960  528764 logs.go:282] 0 containers: []
	W1217 20:40:05.793972  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:40:05.793978  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:40:05.794036  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:40:05.822285  528764 cri.go:89] found id: ""
	I1217 20:40:05.822299  528764 logs.go:282] 0 containers: []
	W1217 20:40:05.822306  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:40:05.822314  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:40:05.822371  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:40:05.850250  528764 cri.go:89] found id: ""
	I1217 20:40:05.850264  528764 logs.go:282] 0 containers: []
	W1217 20:40:05.850271  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:40:05.850277  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:40:05.850335  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:40:05.895396  528764 cri.go:89] found id: ""
	I1217 20:40:05.895410  528764 logs.go:282] 0 containers: []
	W1217 20:40:05.895417  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:40:05.895422  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:40:05.895477  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:40:05.922557  528764 cri.go:89] found id: ""
	I1217 20:40:05.922571  528764 logs.go:282] 0 containers: []
	W1217 20:40:05.922580  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:40:05.922586  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:40:05.922644  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:40:05.948573  528764 cri.go:89] found id: ""
	I1217 20:40:05.948586  528764 logs.go:282] 0 containers: []
	W1217 20:40:05.948594  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:40:05.948599  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:40:05.948655  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:40:05.975477  528764 cri.go:89] found id: ""
	I1217 20:40:05.975492  528764 logs.go:282] 0 containers: []
	W1217 20:40:05.975499  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:40:05.975507  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:40:05.975518  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:40:06.041819  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:40:06.041840  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:40:06.056861  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:40:06.056877  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:40:06.121776  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:40:06.113002   17431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:06.113953   17431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:06.115560   17431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:06.115989   17431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:06.117538   17431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 20:40:06.113002   17431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:06.113953   17431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:06.115560   17431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:06.115989   17431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:40:06.117538   17431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:40:06.121787  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:40:06.121799  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:40:06.189149  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:40:06.189168  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:40:08.726723  528764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:40:08.736543  528764 kubeadm.go:602] duration metric: took 4m2.922502769s to restartPrimaryControlPlane
	W1217 20:40:08.736595  528764 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
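
After 4m2.9s of fruitless polling, minikube abandons the restart path and falls back to wiping and re-initializing the control plane. A condensed manual equivalent of the fallback it runs next, taken from the two commands logged below (the binary path, CRI socket, and config path all come from the log):

    sudo env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" \
        kubeadm reset --cri-socket /var/run/crio/crio.sock --force
    sudo env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" \
        kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
        --ignore-preflight-errors=...   # full ignore list exactly as in the logged command
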
	I1217 20:40:08.736673  528764 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1217 20:40:09.144455  528764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 20:40:09.157270  528764 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 20:40:09.165045  528764 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 20:40:09.165097  528764 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 20:40:09.172944  528764 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 20:40:09.172955  528764 kubeadm.go:158] found existing configuration files:
	
	I1217 20:40:09.173008  528764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1217 20:40:09.180768  528764 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 20:40:09.180823  528764 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 20:40:09.188593  528764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1217 20:40:09.196627  528764 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 20:40:09.196696  528764 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 20:40:09.204027  528764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1217 20:40:09.211590  528764 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 20:40:09.211645  528764 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 20:40:09.219300  528764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1217 20:40:09.227194  528764 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 20:40:09.227262  528764 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
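
The stale-config sweep above applies one pattern to each of the four kubeconfig files: grep for the expected control-plane endpoint, and remove the file if the endpoint is absent (or, as here, if the file does not exist at all). A compact sketch of the same logic:

    # remove any kubeconfig that does not point at the expected endpoint
    for f in admin kubelet controller-manager scheduler; do
        sudo grep -q "https://control-plane.minikube.internal:8441" "/etc/kubernetes/${f}.conf" \
            || sudo rm -f "/etc/kubernetes/${f}.conf"
    done
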
	I1217 20:40:09.234747  528764 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 20:40:09.272070  528764 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1217 20:40:09.272212  528764 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 20:40:09.341132  528764 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 20:40:09.341223  528764 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1217 20:40:09.341264  528764 kubeadm.go:319] OS: Linux
	I1217 20:40:09.341317  528764 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 20:40:09.341383  528764 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 20:40:09.341441  528764 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 20:40:09.341494  528764 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 20:40:09.341544  528764 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 20:40:09.341595  528764 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 20:40:09.341642  528764 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 20:40:09.341697  528764 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 20:40:09.341746  528764 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 20:40:09.410099  528764 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 20:40:09.410202  528764 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 20:40:09.410291  528764 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 20:40:09.420776  528764 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 20:40:09.424281  528764 out.go:252]   - Generating certificates and keys ...
	I1217 20:40:09.424384  528764 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 20:40:09.424470  528764 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 20:40:09.424574  528764 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1217 20:40:09.424647  528764 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1217 20:40:09.424730  528764 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1217 20:40:09.424800  528764 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1217 20:40:09.424875  528764 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1217 20:40:09.424947  528764 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1217 20:40:09.425042  528764 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1217 20:40:09.425124  528764 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1217 20:40:09.425164  528764 kubeadm.go:319] [certs] Using the existing "sa" key
	I1217 20:40:09.425224  528764 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 20:40:09.510914  528764 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 20:40:09.769116  528764 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 20:40:10.300117  528764 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 20:40:10.525653  528764 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 20:40:10.613609  528764 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 20:40:10.614221  528764 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 20:40:10.616799  528764 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 20:40:10.619993  528764 out.go:252]   - Booting up control plane ...
	I1217 20:40:10.620096  528764 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 20:40:10.620217  528764 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 20:40:10.620290  528764 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 20:40:10.635322  528764 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 20:40:10.635439  528764 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 20:40:10.644820  528764 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 20:40:10.645930  528764 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 20:40:10.645984  528764 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 20:40:10.779996  528764 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 20:40:10.780110  528764 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 20:44:10.781176  528764 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001248714s
	I1217 20:44:10.781203  528764 kubeadm.go:319] 
	I1217 20:44:10.781260  528764 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 20:44:10.781303  528764 kubeadm.go:319] 	- The kubelet is not running
	I1217 20:44:10.781406  528764 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 20:44:10.781411  528764 kubeadm.go:319] 
	I1217 20:44:10.781555  528764 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 20:44:10.781602  528764 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 20:44:10.781633  528764 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 20:44:10.781637  528764 kubeadm.go:319] 
	I1217 20:44:10.786300  528764 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1217 20:44:10.786712  528764 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 20:44:10.786818  528764 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 20:44:10.787052  528764 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1217 20:44:10.787056  528764 kubeadm.go:319] 
	I1217 20:44:10.787124  528764 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1217 20:44:10.787237  528764 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001248714s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
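
The failure itself is upstream of kubeadm: the kubelet never answers its local health endpoint within the 4-minute budget, so no static pods (the apiserver included) ever come up. The log already names the triage commands; consolidated, and run on the node:

    sudo systemctl status kubelet                          # is the service running at all?
    sudo journalctl -xeu kubelet --no-pager | tail -n 100  # why it is crash-looping or failing to start
    curl -sSL http://127.0.0.1:10248/healthz               # the exact probe kubeadm's kubelet-check uses
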
	
	I1217 20:44:10.787339  528764 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1217 20:44:11.201167  528764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 20:44:11.214381  528764 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 20:44:11.214439  528764 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 20:44:11.222598  528764 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 20:44:11.222610  528764 kubeadm.go:158] found existing configuration files:
	
	I1217 20:44:11.222661  528764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1217 20:44:11.230419  528764 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 20:44:11.230478  528764 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 20:44:11.238159  528764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1217 20:44:11.246406  528764 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 20:44:11.246462  528764 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 20:44:11.254307  528764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1217 20:44:11.262104  528764 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 20:44:11.262159  528764 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 20:44:11.270202  528764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1217 20:44:11.278439  528764 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 20:44:11.278497  528764 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 20:44:11.286143  528764 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 20:44:11.330597  528764 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1217 20:44:11.330648  528764 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 20:44:11.407432  528764 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 20:44:11.407494  528764 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1217 20:44:11.407526  528764 kubeadm.go:319] OS: Linux
	I1217 20:44:11.407568  528764 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 20:44:11.407631  528764 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 20:44:11.407675  528764 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 20:44:11.407720  528764 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 20:44:11.407764  528764 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 20:44:11.407809  528764 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 20:44:11.407851  528764 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 20:44:11.407896  528764 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 20:44:11.407938  528764 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 20:44:11.479750  528764 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 20:44:11.479854  528764 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 20:44:11.479945  528764 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 20:44:11.492072  528764 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 20:44:11.494989  528764 out.go:252]   - Generating certificates and keys ...
	I1217 20:44:11.495078  528764 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 20:44:11.495152  528764 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 20:44:11.495231  528764 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1217 20:44:11.495312  528764 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1217 20:44:11.495394  528764 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1217 20:44:11.495452  528764 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1217 20:44:11.495526  528764 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1217 20:44:11.495616  528764 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1217 20:44:11.495700  528764 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1217 20:44:11.495778  528764 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1217 20:44:11.495818  528764 kubeadm.go:319] [certs] Using the existing "sa" key
	I1217 20:44:11.495877  528764 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 20:44:11.718879  528764 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 20:44:11.913718  528764 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 20:44:12.104953  528764 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 20:44:12.214740  528764 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 20:44:13.078100  528764 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 20:44:13.078681  528764 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 20:44:13.081470  528764 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 20:44:13.086841  528764 out.go:252]   - Booting up control plane ...
	I1217 20:44:13.086964  528764 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 20:44:13.087047  528764 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 20:44:13.087115  528764 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 20:44:13.101223  528764 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 20:44:13.101325  528764 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 20:44:13.108618  528764 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 20:44:13.108874  528764 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 20:44:13.109039  528764 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 20:44:13.243147  528764 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 20:44:13.243267  528764 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 20:48:13.243345  528764 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000238438s
	I1217 20:48:13.243376  528764 kubeadm.go:319] 
	I1217 20:48:13.243430  528764 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 20:48:13.243460  528764 kubeadm.go:319] 	- The kubelet is not running
	I1217 20:48:13.243558  528764 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 20:48:13.243562  528764 kubeadm.go:319] 
	I1217 20:48:13.243678  528764 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 20:48:13.243708  528764 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 20:48:13.243736  528764 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 20:48:13.243739  528764 kubeadm.go:319] 
	I1217 20:48:13.247539  528764 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1217 20:48:13.247985  528764 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 20:48:13.248095  528764 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 20:48:13.248338  528764 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1217 20:48:13.248343  528764 kubeadm.go:319] 
	I1217 20:48:13.248416  528764 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
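
Both init attempts fail the same way, and both emit the same preflight warnings: the missing "configs" kernel module on the 5.15.0-1084-aws kernel, a disabled kubelet service, and the cgroups v1 deprecation notice. The warnings are non-fatal on their own, but the cgroups one is worth checking whenever the kubelet refuses to start on v1.35+: per the warning's own text, running on a cgroup v1 host now requires explicitly setting the kubelet configuration option FailCgroupV1 to false. A quick way to check which cgroup version the host is actually on (a generic probe, not minikube-specific):

    stat -fc %T /sys/fs/cgroup/   # prints "cgroup2fs" on cgroup v2 hosts, "tmpfs" on cgroup v1
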
	I1217 20:48:13.248469  528764 kubeadm.go:403] duration metric: took 12m7.468824114s to StartCluster
	I1217 20:48:13.248499  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:48:13.248560  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:48:13.273652  528764 cri.go:89] found id: ""
	I1217 20:48:13.273665  528764 logs.go:282] 0 containers: []
	W1217 20:48:13.273672  528764 logs.go:284] No container was found matching "kube-apiserver"
	I1217 20:48:13.273677  528764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:48:13.273743  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:48:13.299758  528764 cri.go:89] found id: ""
	I1217 20:48:13.299773  528764 logs.go:282] 0 containers: []
	W1217 20:48:13.299780  528764 logs.go:284] No container was found matching "etcd"
	I1217 20:48:13.299787  528764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:48:13.299849  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:48:13.331514  528764 cri.go:89] found id: ""
	I1217 20:48:13.331527  528764 logs.go:282] 0 containers: []
	W1217 20:48:13.331534  528764 logs.go:284] No container was found matching "coredns"
	I1217 20:48:13.331538  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:48:13.331632  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:48:13.361494  528764 cri.go:89] found id: ""
	I1217 20:48:13.361508  528764 logs.go:282] 0 containers: []
	W1217 20:48:13.361515  528764 logs.go:284] No container was found matching "kube-scheduler"
	I1217 20:48:13.361520  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:48:13.361583  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:48:13.392361  528764 cri.go:89] found id: ""
	I1217 20:48:13.392374  528764 logs.go:282] 0 containers: []
	W1217 20:48:13.392382  528764 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:48:13.392387  528764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:48:13.392445  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:48:13.420567  528764 cri.go:89] found id: ""
	I1217 20:48:13.420581  528764 logs.go:282] 0 containers: []
	W1217 20:48:13.420589  528764 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 20:48:13.420594  528764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:48:13.420652  528764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:48:13.446072  528764 cri.go:89] found id: ""
	I1217 20:48:13.446086  528764 logs.go:282] 0 containers: []
	W1217 20:48:13.446093  528764 logs.go:284] No container was found matching "kindnet"
	I1217 20:48:13.446102  528764 logs.go:123] Gathering logs for kubelet ...
	I1217 20:48:13.446112  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:48:13.512293  528764 logs.go:123] Gathering logs for dmesg ...
	I1217 20:48:13.512314  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:48:13.527934  528764 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:48:13.527951  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:48:13.596728  528764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:48:13.587277   21200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:48:13.587931   21200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:48:13.589795   21200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:48:13.590404   21200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:48:13.592261   21200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	[identical "Unhandled Error" and connection-refused lines as printed above]
	
	** /stderr **
	I1217 20:48:13.596751  528764 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:48:13.596762  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:48:13.666834  528764 logs.go:123] Gathering logs for container status ...
	I1217 20:48:13.666852  528764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 20:48:13.697763  528764 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000238438s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1217 20:48:13.697796  528764 out.go:285] * 
	W1217 20:48:13.697859  528764 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[stdout and stderr identical to the kubeadm init output above]
	W1217 20:48:13.697876  528764 out.go:285] * 
	W1217 20:48:13.700016  528764 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 20:48:13.704929  528764 out.go:203] 
	W1217 20:48:13.708733  528764 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[stdout and stderr identical to the kubeadm init output above]
	W1217 20:48:13.708785  528764 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1217 20:48:13.708804  528764 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1217 20:48:13.713576  528764 out.go:203] 
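
The run above is internally consistent: kubeadm's wait-control-plane phase gives up after 4m0s because the kubelet never reports healthy on http://127.0.0.1:10248/healthz, so the apiserver on port 8441 never comes up. A minimal sketch of the two follow-ups the suggestion itself names, assuming the profile name functional-655452 and the binary path used throughout this report:

	# Inspect why the kubelet keeps crash-looping inside the node:
	out/minikube-linux-arm64 ssh -p functional-655452 -- sudo journalctl -xeu kubelet | tail -n 50

	# Retry the start with the cgroup driver named in the suggestion:
	out/minikube-linux-arm64 start -p functional-655452 --extra-config=kubelet.cgroup-driver=systemd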
	
	
	==> CRI-O <==
	Dec 17 20:36:04 functional-655452 crio[10065]: time="2025-12-17T20:36:04.496553819Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 17 20:36:04 functional-655452 crio[10065]: time="2025-12-17T20:36:04.496588913Z" level=info msg="Starting seccomp notifier watcher"
	Dec 17 20:36:04 functional-655452 crio[10065]: time="2025-12-17T20:36:04.496641484Z" level=info msg="Create NRI interface"
	Dec 17 20:36:04 functional-655452 crio[10065]: time="2025-12-17T20:36:04.496756307Z" level=info msg="built-in NRI default validator is disabled"
	Dec 17 20:36:04 functional-655452 crio[10065]: time="2025-12-17T20:36:04.496765161Z" level=info msg="runtime interface created"
	Dec 17 20:36:04 functional-655452 crio[10065]: time="2025-12-17T20:36:04.496787586Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 17 20:36:04 functional-655452 crio[10065]: time="2025-12-17T20:36:04.496795537Z" level=info msg="runtime interface starting up..."
	Dec 17 20:36:04 functional-655452 crio[10065]: time="2025-12-17T20:36:04.496804792Z" level=info msg="starting plugins..."
	Dec 17 20:36:04 functional-655452 crio[10065]: time="2025-12-17T20:36:04.496818503Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 17 20:36:04 functional-655452 crio[10065]: time="2025-12-17T20:36:04.496896764Z" level=info msg="No systemd watchdog enabled"
	Dec 17 20:36:04 functional-655452 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 17 20:40:09 functional-655452 crio[10065]: time="2025-12-17T20:40:09.415834383Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-rc.1" id=58f6f0f1-488b-4240-a679-3e157f00d7e7 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:40:09 functional-655452 crio[10065]: time="2025-12-17T20:40:09.416590837Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" id=05b425cc-49a9-416d-8e00-62945047df36 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:40:09 functional-655452 crio[10065]: time="2025-12-17T20:40:09.417323538Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-rc.1" id=a9a38e6d-b290-413f-a93f-cf194783972f name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:40:09 functional-655452 crio[10065]: time="2025-12-17T20:40:09.417962945Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-rc.1" id=bdf79a37-e5ac-441d-baa9-990efb2af86f name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:40:09 functional-655452 crio[10065]: time="2025-12-17T20:40:09.418404377Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=f29ade00-2b87-48af-a8d1-af1f70d12fc1 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:40:09 functional-655452 crio[10065]: time="2025-12-17T20:40:09.418943992Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=aa01ccac-5dc1-42c2-9b96-b5307aedf908 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:40:09 functional-655452 crio[10065]: time="2025-12-17T20:40:09.419435131Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.6-0" id=3071c5cb-d2e8-40e4-bf26-10cfdb83c6ca name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:44:11 functional-655452 crio[10065]: time="2025-12-17T20:44:11.483168755Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-rc.1" id=116885b2-e96e-48a5-8c7d-749c0bd3c872 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:44:11 functional-655452 crio[10065]: time="2025-12-17T20:44:11.484179432Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" id=a7b99d88-fbbf-4485-ad77-1f09bb11e283 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:44:11 functional-655452 crio[10065]: time="2025-12-17T20:44:11.484714555Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-rc.1" id=1a3a48a9-47e1-4681-9a10-70d7c5e85de2 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:44:11 functional-655452 crio[10065]: time="2025-12-17T20:44:11.48529777Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-rc.1" id=48ecbe50-05dc-4736-8a4c-23a7b8f0b752 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:44:11 functional-655452 crio[10065]: time="2025-12-17T20:44:11.485817657Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=13bf3d26-ab2e-4773-bb7e-3fc288ba3714 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:44:11 functional-655452 crio[10065]: time="2025-12-17T20:44:11.486350122Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=3ebf0c9f-0c46-4d67-8924-03dd39ad4399 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:44:11 functional-655452 crio[10065]: time="2025-12-17T20:44:11.486847969Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.6-0" id=e3deb8c8-e04b-4949-9c80-5a8e5a9b5bee name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:49:57.845618   22602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:49:57.846153   22602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:49:57.847998   22602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:49:57.848411   22602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:49:57.849929   22602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec17 19:02] hrtimer: interrupt took 71042327 ns
	[Dec17 20:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec17 20:12] overlayfs: idmapped layers are currently not supported
	[  +0.080288] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec17 20:17] overlayfs: idmapped layers are currently not supported
	[Dec17 20:18] overlayfs: idmapped layers are currently not supported
	[Dec17 20:35] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 20:49:57 up  3:32,  0 user,  load average: 0.08, 0.19, 0.49
	Linux functional-655452 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 20:49:55 functional-655452 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 20:49:56 functional-655452 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1098.
	Dec 17 20:49:56 functional-655452 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 20:49:56 functional-655452 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 20:49:56 functional-655452 kubelet[22495]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 20:49:56 functional-655452 kubelet[22495]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 20:49:56 functional-655452 kubelet[22495]: E1217 20:49:56.115868   22495 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 20:49:56 functional-655452 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 20:49:56 functional-655452 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 20:49:56 functional-655452 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1099.
	Dec 17 20:49:56 functional-655452 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 20:49:56 functional-655452 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 20:49:56 functional-655452 kubelet[22503]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 20:49:56 functional-655452 kubelet[22503]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 20:49:56 functional-655452 kubelet[22503]: E1217 20:49:56.873831   22503 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 20:49:56 functional-655452 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 20:49:56 functional-655452 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 20:49:57 functional-655452 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1100.
	Dec 17 20:49:57 functional-655452 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 20:49:57 functional-655452 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 20:49:57 functional-655452 kubelet[22549]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 20:49:57 functional-655452 kubelet[22549]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 20:49:57 functional-655452 kubelet[22549]: E1217 20:49:57.645165   22549 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 20:49:57 functional-655452 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 20:49:57 functional-655452 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
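
The kubelet journal above pins the root cause: kubelet v1.35.0-rc.1 refuses to validate its configuration on a cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1"), so systemd has restarted it roughly 1100 times. Per the [WARNING SystemVerification] text earlier in the log, opting back into cgroup v1 means setting the kubelet configuration option 'FailCgroupV1' to 'false'. A minimal config sketch; the apiVersion/kind header is the standard KubeletConfiguration framing, and merging this into the /var/lib/kubelet/config.yaml the run writes is an assumption, not something this report does:

	# kubelet config fragment; only failCgroupV1 is taken from the warning above
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false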
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-655452 -n functional-655452
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-655452 -n functional-655452: exit status 2 (332.708954ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-655452" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect (2.34s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim (241.63s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[previous WARNING repeated 10 times]
I1217 20:48:32.004457  488412 retry.go:31] will retry after 3.307799682s: Temporary Error: Get "http://10.111.59.156": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[previous WARNING repeated 13 times]
I1217 20:48:45.312679  488412 retry.go:31] will retry after 2.87964885s: Temporary Error: Get "http://10.111.59.156": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[previous WARNING repeated 12 times]
E1217 20:48:56.660930  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1217 20:48:58.194247  488412 retry.go:31] will retry after 6.542104376s: Temporary Error: Get "http://10.111.59.156": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[previous WARNING repeated 17 times]
I1217 20:49:14.738661  488412 retry.go:31] will retry after 12.219125024s: Temporary Error: Get "http://10.111.59.156": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[last message repeated 21 more times]
I1217 20:49:36.958741  488412 retry.go:31] will retry after 9.183339632s: Temporary Error: Get "http://10.111.59.156": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[last message repeated 142 more times]
E1217 20:51:59.730721  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: (previous WARNING repeated 21 more times while polling)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:50: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: pod "integration-test=storage-provisioner" failed to start within 4m0s: context deadline exceeded ****
functional_test_pvc_test.go:50: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-655452 -n functional-655452
functional_test_pvc_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-655452 -n functional-655452: exit status 2 (304.56942ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
functional_test_pvc_test.go:50: status error: exit status 2 (may be ok)
functional_test_pvc_test.go:50: "functional-655452" apiserver is not running, skipping kubectl commands (state="Stopped")
functional_test_pvc_test.go:51: failed waiting for storage-provisioner: integration-test=storage-provisioner within 4m0s: context deadline exceeded
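The wall of WARNINGs above is just the harness polling: it lists pods in kube-system by the label selector integration-test=storage-provisioner every few seconds until the 4m0s deadline, and every attempt dies on the same refused connection to 192.168.49.2:8441. A minimal client-go sketch of that kind of wait loop, for readers unfamiliar with the pattern (illustrative only; waitForLabeledPod is a hypothetical helper, not minikube's actual test code):

	// poll_pods.go: a sketch of the label-selector poll the harness runs above.
	package main

	import (
		"context"
		"fmt"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForLabeledPod polls until a pod matching selector is Running,
	// or until the 4m timeout (mirroring the 4m0s deadline above) expires.
	func waitForLabeledPod(ctx context.Context, cs *kubernetes.Clientset, ns, selector string) error {
		return wait.PollUntilContextTimeout(ctx, 2*time.Second, 4*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
				if err != nil {
					// A refused connection (apiserver down) is logged and retried;
					// this is exactly what produces one WARNING line per attempt.
					fmt.Printf("WARNING: pod list for %q %q returned: %v\n", ns, selector, err)
					return false, nil
				}
				for _, p := range pods.Items {
					if p.Status.Phase == "Running" {
						return true, nil
					}
				}
				return false, nil
			})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		if err := waitForLabeledPod(context.Background(), cs, "kube-system", "integration-test=storage-provisioner"); err != nil {
			fmt.Println("wait failed:", err)
		}
	}

Note that the condition function returns (false, nil) on a refused connection rather than an error, which is what keeps the loop retrying until the context deadline fires with "context deadline exceeded".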
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-655452
helpers_test.go:244: (dbg) docker inspect functional-655452:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ce7a6a54d14f425b4187bd0ca1b803527a4413b0e2019a0bca6ff0c4cc720b0f",
	        "Created": "2025-12-17T20:21:23.127111799Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 517339,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T20:21:23.205943598Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/ce7a6a54d14f425b4187bd0ca1b803527a4413b0e2019a0bca6ff0c4cc720b0f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ce7a6a54d14f425b4187bd0ca1b803527a4413b0e2019a0bca6ff0c4cc720b0f/hostname",
	        "HostsPath": "/var/lib/docker/containers/ce7a6a54d14f425b4187bd0ca1b803527a4413b0e2019a0bca6ff0c4cc720b0f/hosts",
	        "LogPath": "/var/lib/docker/containers/ce7a6a54d14f425b4187bd0ca1b803527a4413b0e2019a0bca6ff0c4cc720b0f/ce7a6a54d14f425b4187bd0ca1b803527a4413b0e2019a0bca6ff0c4cc720b0f-json.log",
	        "Name": "/functional-655452",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-655452:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-655452",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ce7a6a54d14f425b4187bd0ca1b803527a4413b0e2019a0bca6ff0c4cc720b0f",
	                "LowerDir": "/var/lib/docker/overlay2/b078ea641dafe3bb75ca3d83c43fd851f0f209e5f3a5a8b9ceab888e394031a8-init/diff:/var/lib/docker/overlay2/c4759b344ecb109c83d66e4ef56c76903f1d1e597efb86ab2d2c7911b1130a8f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b078ea641dafe3bb75ca3d83c43fd851f0f209e5f3a5a8b9ceab888e394031a8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b078ea641dafe3bb75ca3d83c43fd851f0f209e5f3a5a8b9ceab888e394031a8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b078ea641dafe3bb75ca3d83c43fd851f0f209e5f3a5a8b9ceab888e394031a8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-655452",
	                "Source": "/var/lib/docker/volumes/functional-655452/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-655452",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-655452",
	                "name.minikube.sigs.k8s.io": "functional-655452",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "77ae7dbcf69b3201be4ce65a88d235dbad078142318e01a5939425f9766fb924",
	            "SandboxKey": "/var/run/docker/netns/77ae7dbcf69b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33178"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33179"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33182"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33180"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33181"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-655452": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "36:c6:98:b5:58:5a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b68cd6e69ccdb81b0870dd976e2d5401ca696480842d2c469293121d59435cfb",
	                    "EndpointID": "82926bb4b08b5f66131c1026a8bc72a582e9cb8c6d97a278a494965923f47cb0",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-655452",
	                        "ce7a6a54d14f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
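One useful fact in the inspect dump: under NetworkSettings.Ports, the apiserver's 8441/tcp is published on 127.0.0.1:33181, so the Docker-level plumbing is intact and the refusals come from inside the container. A short Go sketch of pulling that mapping out of docker inspect output (hostPortFor is a hypothetical helper; the struct mirrors only the fields shown above):

	// port_lookup.go: extract the host port bound to a container port,
	// assuming the JSON layout shown in the inspect output above.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type inspectEntry struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func hostPortFor(container, containerPort string) (string, error) {
		out, err := exec.Command("docker", "inspect", container).Output()
		if err != nil {
			return "", err
		}
		// docker inspect emits a JSON array, one entry per container.
		var entries []inspectEntry
		if err := json.Unmarshal(out, &entries); err != nil {
			return "", err
		}
		if len(entries) == 0 {
			return "", fmt.Errorf("no such container: %s", container)
		}
		bindings := entries[0].NetworkSettings.Ports[containerPort]
		if len(bindings) == 0 {
			return "", fmt.Errorf("%s not published", containerPort)
		}
		return bindings[0].HostPort, nil
	}

	func main() {
		// Against the inspect output above this would print 33181.
		port, err := hostPortFor("functional-655452", "8441/tcp")
		fmt.Println(port, err)
	}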
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-655452 -n functional-655452
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-655452 -n functional-655452: exit status 2 (306.785335ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                           ARGS                                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-655452 image load --daemon kicbase/echo-server:functional-655452 --alsologtostderr                                                             │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:50 UTC │ 17 Dec 25 20:50 UTC │
	│ image          │ functional-655452 image ls                                                                                                                                │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:50 UTC │ 17 Dec 25 20:50 UTC │
	│ image          │ functional-655452 image save kicbase/echo-server:functional-655452 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:50 UTC │ 17 Dec 25 20:50 UTC │
	│ image          │ functional-655452 image rm kicbase/echo-server:functional-655452 --alsologtostderr                                                                        │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:50 UTC │ 17 Dec 25 20:50 UTC │
	│ image          │ functional-655452 image ls                                                                                                                                │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:50 UTC │ 17 Dec 25 20:50 UTC │
	│ image          │ functional-655452 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:50 UTC │ 17 Dec 25 20:50 UTC │
	│ image          │ functional-655452 image ls                                                                                                                                │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:50 UTC │ 17 Dec 25 20:50 UTC │
	│ image          │ functional-655452 image save --daemon kicbase/echo-server:functional-655452 --alsologtostderr                                                             │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:50 UTC │ 17 Dec 25 20:50 UTC │
	│ ssh            │ functional-655452 ssh sudo cat /etc/test/nested/copy/488412/hosts                                                                                         │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:50 UTC │ 17 Dec 25 20:50 UTC │
	│ ssh            │ functional-655452 ssh sudo cat /etc/ssl/certs/488412.pem                                                                                                  │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:50 UTC │ 17 Dec 25 20:50 UTC │
	│ ssh            │ functional-655452 ssh sudo cat /usr/share/ca-certificates/488412.pem                                                                                      │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:50 UTC │ 17 Dec 25 20:50 UTC │
	│ ssh            │ functional-655452 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                  │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:50 UTC │ 17 Dec 25 20:50 UTC │
	│ ssh            │ functional-655452 ssh sudo cat /etc/ssl/certs/4884122.pem                                                                                                 │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:50 UTC │ 17 Dec 25 20:50 UTC │
	│ ssh            │ functional-655452 ssh sudo cat /usr/share/ca-certificates/4884122.pem                                                                                     │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:50 UTC │ 17 Dec 25 20:50 UTC │
	│ ssh            │ functional-655452 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                  │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:50 UTC │ 17 Dec 25 20:50 UTC │
	│ image          │ functional-655452 image ls --format short --alsologtostderr                                                                                               │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:50 UTC │ 17 Dec 25 20:50 UTC │
	│ update-context │ functional-655452 update-context --alsologtostderr -v=2                                                                                                   │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:50 UTC │ 17 Dec 25 20:50 UTC │
	│ ssh            │ functional-655452 ssh pgrep buildkitd                                                                                                                     │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:50 UTC │                     │
	│ image          │ functional-655452 image build -t localhost/my-image:functional-655452 testdata/build --alsologtostderr                                                    │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:50 UTC │ 17 Dec 25 20:50 UTC │
	│ image          │ functional-655452 image ls                                                                                                                                │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:50 UTC │ 17 Dec 25 20:50 UTC │
	│ image          │ functional-655452 image ls --format yaml --alsologtostderr                                                                                                │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:50 UTC │ 17 Dec 25 20:50 UTC │
	│ image          │ functional-655452 image ls --format json --alsologtostderr                                                                                                │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:50 UTC │ 17 Dec 25 20:50 UTC │
	│ image          │ functional-655452 image ls --format table --alsologtostderr                                                                                               │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:50 UTC │ 17 Dec 25 20:50 UTC │
	│ update-context │ functional-655452 update-context --alsologtostderr -v=2                                                                                                   │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:50 UTC │ 17 Dec 25 20:50 UTC │
	│ update-context │ functional-655452 update-context --alsologtostderr -v=2                                                                                                   │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:50 UTC │ 17 Dec 25 20:50 UTC │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 20:50:14
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 20:50:14.416108  546005 out.go:360] Setting OutFile to fd 1 ...
	I1217 20:50:14.416303  546005 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:50:14.416337  546005 out.go:374] Setting ErrFile to fd 2...
	I1217 20:50:14.416356  546005 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:50:14.416664  546005 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-485134/.minikube/bin
	I1217 20:50:14.417124  546005 out.go:368] Setting JSON to false
	I1217 20:50:14.418092  546005 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":12764,"bootTime":1765991851,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1217 20:50:14.418210  546005 start.go:143] virtualization:  
	I1217 20:50:14.421680  546005 out.go:179] * [functional-655452] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 20:50:14.424709  546005 out.go:179]   - MINIKUBE_LOCATION=21808
	I1217 20:50:14.424790  546005 notify.go:221] Checking for updates...
	I1217 20:50:14.430624  546005 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 20:50:14.433538  546005 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-485134/kubeconfig
	I1217 20:50:14.436475  546005 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-485134/.minikube
	I1217 20:50:14.439543  546005 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 20:50:14.442511  546005 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 20:50:14.445934  546005 config.go:182] Loaded profile config "functional-655452": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 20:50:14.446540  546005 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 20:50:14.472609  546005 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 20:50:14.472735  546005 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 20:50:14.533624  546005 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 20:50:14.524341367 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 20:50:14.533733  546005 docker.go:319] overlay module found
	I1217 20:50:14.536881  546005 out.go:179] * Using the docker driver based on existing profile
	I1217 20:50:14.539750  546005 start.go:309] selected driver: docker
	I1217 20:50:14.539770  546005 start.go:927] validating driver "docker" against &{Name:functional-655452 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-655452 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:50:14.539870  546005 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 20:50:14.539971  546005 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 20:50:14.610390  546005 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 20:50:14.601163794 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 20:50:14.610824  546005 cni.go:84] Creating CNI manager for ""
	I1217 20:50:14.610891  546005 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 20:50:14.610937  546005 start.go:353] cluster config:
	{Name:functional-655452 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-655452 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:50:14.614124  546005 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Dec 17 20:44:11 functional-655452 crio[10065]: time="2025-12-17T20:44:11.483168755Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-rc.1" id=116885b2-e96e-48a5-8c7d-749c0bd3c872 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:44:11 functional-655452 crio[10065]: time="2025-12-17T20:44:11.484179432Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" id=a7b99d88-fbbf-4485-ad77-1f09bb11e283 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:44:11 functional-655452 crio[10065]: time="2025-12-17T20:44:11.484714555Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-rc.1" id=1a3a48a9-47e1-4681-9a10-70d7c5e85de2 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:44:11 functional-655452 crio[10065]: time="2025-12-17T20:44:11.48529777Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-rc.1" id=48ecbe50-05dc-4736-8a4c-23a7b8f0b752 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:44:11 functional-655452 crio[10065]: time="2025-12-17T20:44:11.485817657Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=13bf3d26-ab2e-4773-bb7e-3fc288ba3714 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:44:11 functional-655452 crio[10065]: time="2025-12-17T20:44:11.486350122Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=3ebf0c9f-0c46-4d67-8924-03dd39ad4399 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:44:11 functional-655452 crio[10065]: time="2025-12-17T20:44:11.486847969Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.6-0" id=e3deb8c8-e04b-4949-9c80-5a8e5a9b5bee name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:50:19 functional-655452 crio[10065]: time="2025-12-17T20:50:19.311123909Z" level=info msg="Checking image status: kicbase/echo-server:functional-655452" id=78b6a136-b248-42ab-bead-b49a54ecd6f7 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:50:19 functional-655452 crio[10065]: time="2025-12-17T20:50:19.311333971Z" level=info msg="Resolving \"kicbase/echo-server\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 17 20:50:19 functional-655452 crio[10065]: time="2025-12-17T20:50:19.311394362Z" level=info msg="Image kicbase/echo-server:functional-655452 not found" id=78b6a136-b248-42ab-bead-b49a54ecd6f7 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:50:19 functional-655452 crio[10065]: time="2025-12-17T20:50:19.311494162Z" level=info msg="Neither image nor artifact kicbase/echo-server:functional-655452 found" id=78b6a136-b248-42ab-bead-b49a54ecd6f7 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:50:19 functional-655452 crio[10065]: time="2025-12-17T20:50:19.338505341Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-655452" id=3bce5db3-740b-4033-8c8b-98b11e0e0354 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:50:19 functional-655452 crio[10065]: time="2025-12-17T20:50:19.338651887Z" level=info msg="Image docker.io/kicbase/echo-server:functional-655452 not found" id=3bce5db3-740b-4033-8c8b-98b11e0e0354 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:50:19 functional-655452 crio[10065]: time="2025-12-17T20:50:19.338691305Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:functional-655452 found" id=3bce5db3-740b-4033-8c8b-98b11e0e0354 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:50:19 functional-655452 crio[10065]: time="2025-12-17T20:50:19.366516494Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-655452" id=529c5231-7d71-4355-820a-25cc7e74d04d name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:50:19 functional-655452 crio[10065]: time="2025-12-17T20:50:19.366669513Z" level=info msg="Image localhost/kicbase/echo-server:functional-655452 not found" id=529c5231-7d71-4355-820a-25cc7e74d04d name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:50:19 functional-655452 crio[10065]: time="2025-12-17T20:50:19.366709588Z" level=info msg="Neither image nor artifact localhost/kicbase/echo-server:functional-655452 found" id=529c5231-7d71-4355-820a-25cc7e74d04d name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:50:22 functional-655452 crio[10065]: time="2025-12-17T20:50:22.374299677Z" level=info msg="Checking image status: kicbase/echo-server:functional-655452" id=9f78a78b-37c2-4877-ac46-0ceedc4a0747 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:50:22 functional-655452 crio[10065]: time="2025-12-17T20:50:22.374516205Z" level=info msg="Resolving \"kicbase/echo-server\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 17 20:50:22 functional-655452 crio[10065]: time="2025-12-17T20:50:22.374573806Z" level=info msg="Image kicbase/echo-server:functional-655452 not found" id=9f78a78b-37c2-4877-ac46-0ceedc4a0747 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:50:22 functional-655452 crio[10065]: time="2025-12-17T20:50:22.374650484Z" level=info msg="Neither image nor artifact kicbase/echo-server:functional-655452 found" id=9f78a78b-37c2-4877-ac46-0ceedc4a0747 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:50:22 functional-655452 crio[10065]: time="2025-12-17T20:50:22.405516955Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-655452" id=411b9f90-cf2c-401a-a386-f104ee2b6b78 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:50:22 functional-655452 crio[10065]: time="2025-12-17T20:50:22.405658947Z" level=info msg="Image docker.io/kicbase/echo-server:functional-655452 not found" id=411b9f90-cf2c-401a-a386-f104ee2b6b78 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:50:22 functional-655452 crio[10065]: time="2025-12-17T20:50:22.405699341Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:functional-655452 found" id=411b9f90-cf2c-401a-a386-f104ee2b6b78 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:50:22 functional-655452 crio[10065]: time="2025-12-17T20:50:22.430512597Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-655452" id=2365bc93-acac-4040-859a-2aab529402be name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:52:23.594133   25460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:52:23.594650   25460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:52:23.596142   25460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:52:23.596633   25460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:52:23.598169   25460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec17 19:02] hrtimer: interrupt took 71042327 ns
	[Dec17 20:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec17 20:12] overlayfs: idmapped layers are currently not supported
	[  +0.080288] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec17 20:17] overlayfs: idmapped layers are currently not supported
	[Dec17 20:18] overlayfs: idmapped layers are currently not supported
	[Dec17 20:35] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 20:52:23 up  3:34,  0 user,  load average: 0.37, 0.48, 0.58
	Linux functional-655452 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 20:52:20 functional-655452 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 20:52:21 functional-655452 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1292.
	Dec 17 20:52:21 functional-655452 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 20:52:21 functional-655452 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 20:52:21 functional-655452 kubelet[25336]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 20:52:21 functional-655452 kubelet[25336]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 20:52:21 functional-655452 kubelet[25336]: E1217 20:52:21.612406   25336 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 20:52:21 functional-655452 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 20:52:21 functional-655452 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 20:52:22 functional-655452 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1293.
	Dec 17 20:52:22 functional-655452 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 20:52:22 functional-655452 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 20:52:22 functional-655452 kubelet[25341]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 20:52:22 functional-655452 kubelet[25341]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 20:52:22 functional-655452 kubelet[25341]: E1217 20:52:22.364471   25341 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 20:52:22 functional-655452 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 20:52:22 functional-655452 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 20:52:23 functional-655452 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1294.
	Dec 17 20:52:23 functional-655452 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 20:52:23 functional-655452 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 20:52:23 functional-655452 kubelet[25377]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 20:52:23 functional-655452 kubelet[25377]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 20:52:23 functional-655452 kubelet[25377]: E1217 20:52:23.130958   25377 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 20:52:23 functional-655452 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 20:52:23 functional-655452 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-655452 -n functional-655452
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-655452 -n functional-655452: exit status 2 (315.775084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-655452" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim (241.63s)
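
Note: every failure in this group shares the root cause visible in the kubelet journal above. The v1.35.0-rc.1 kubelet exits during configuration validation because the host is still on cgroup v1 ("kubelet is configured to not run on a host using cgroup v1"), systemd restarts it indefinitely (restart counter 1294 and climbing), and the apiserver therefore never comes up. A minimal check of which cgroup version a host is running, reusing the profile name from this report:

	# prints "cgroup2fs" on a cgroup v2 host, "tmpfs" on cgroup v1
	stat -fc %T /sys/fs/cgroup/
	# the same check inside the minikube node container
	docker exec functional-655452 stat -fc %T /sys/fs/cgroup/

Whether this kubelet can still be told to tolerate cgroup v1 depends on the release's failCgroupV1 kubelet-config setting; treat that knob as an assumption to verify against the v1.35 documentation.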

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels (1.39s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-655452 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:234: (dbg) Non-zero exit: kubectl --context functional-655452 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (64.324875ms)

-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:236: failed to 'kubectl get nodes' with args "kubectl --context functional-655452 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:242: expected to have label "minikube.k8s.io/commit" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/version" in node labels but got : (same stdout/stderr as the block above)
functional_test.go:242: expected to have label "minikube.k8s.io/updated_at" in node labels but got : (same stdout/stderr as the block above)
functional_test.go:242: expected to have label "minikube.k8s.io/name" in node labels but got : (same stdout/stderr as the block above)
functional_test.go:242: expected to have label "minikube.k8s.io/primary" in node labels but got : (same stdout/stderr as the block above)
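The template error itself is a downstream symptom: because the connection to 192.168.49.2:8441 is refused, kubectl gets back an empty node List, and (index .items 0) panics on the empty slice. A guarded variant of the same template (a sketch, not the harness's actual invocation) degrades to empty output instead; in text/template an empty slice is falsy, so the if-guard skips the index call when no nodes come back:

	kubectl --context functional-655452 get nodes --output=go-template \
	  --template='{{if .items}}{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}{{end}}'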
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-655452
helpers_test.go:244: (dbg) docker inspect functional-655452:

-- stdout --
	[
	    {
	        "Id": "ce7a6a54d14f425b4187bd0ca1b803527a4413b0e2019a0bca6ff0c4cc720b0f",
	        "Created": "2025-12-17T20:21:23.127111799Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 517339,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T20:21:23.205943598Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/ce7a6a54d14f425b4187bd0ca1b803527a4413b0e2019a0bca6ff0c4cc720b0f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ce7a6a54d14f425b4187bd0ca1b803527a4413b0e2019a0bca6ff0c4cc720b0f/hostname",
	        "HostsPath": "/var/lib/docker/containers/ce7a6a54d14f425b4187bd0ca1b803527a4413b0e2019a0bca6ff0c4cc720b0f/hosts",
	        "LogPath": "/var/lib/docker/containers/ce7a6a54d14f425b4187bd0ca1b803527a4413b0e2019a0bca6ff0c4cc720b0f/ce7a6a54d14f425b4187bd0ca1b803527a4413b0e2019a0bca6ff0c4cc720b0f-json.log",
	        "Name": "/functional-655452",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-655452:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-655452",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ce7a6a54d14f425b4187bd0ca1b803527a4413b0e2019a0bca6ff0c4cc720b0f",
	                "LowerDir": "/var/lib/docker/overlay2/b078ea641dafe3bb75ca3d83c43fd851f0f209e5f3a5a8b9ceab888e394031a8-init/diff:/var/lib/docker/overlay2/c4759b344ecb109c83d66e4ef56c76903f1d1e597efb86ab2d2c7911b1130a8f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b078ea641dafe3bb75ca3d83c43fd851f0f209e5f3a5a8b9ceab888e394031a8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b078ea641dafe3bb75ca3d83c43fd851f0f209e5f3a5a8b9ceab888e394031a8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b078ea641dafe3bb75ca3d83c43fd851f0f209e5f3a5a8b9ceab888e394031a8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-655452",
	                "Source": "/var/lib/docker/volumes/functional-655452/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-655452",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-655452",
	                "name.minikube.sigs.k8s.io": "functional-655452",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "77ae7dbcf69b3201be4ce65a88d235dbad078142318e01a5939425f9766fb924",
	            "SandboxKey": "/var/run/docker/netns/77ae7dbcf69b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33178"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33179"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33182"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33180"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33181"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-655452": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "36:c6:98:b5:58:5a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b68cd6e69ccdb81b0870dd976e2d5401ca696480842d2c469293121d59435cfb",
	                    "EndpointID": "82926bb4b08b5f66131c1026a8bc72a582e9cb8c6d97a278a494965923f47cb0",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-655452",
	                        "ce7a6a54d14f"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
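For orientation in the inspect dump above: the apiserver port 8441/tcp is published on 127.0.0.1:33181, and such mappings can be pulled out with the same nested-index template the harness later uses for 22/tcp. A one-liner, assuming the same profile name:

	docker container inspect functional-655452 -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'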
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-655452 -n functional-655452
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-655452 -n functional-655452: exit status 2 (302.196371ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                        ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ service   │ functional-655452 service hello-node --url                                                                                                         │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:49 UTC │                     │
	│ ssh       │ functional-655452 ssh findmnt -T /mount-9p | grep 9p                                                                                               │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:50 UTC │                     │
	│ mount     │ -p functional-655452 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun878729531/001:/mount-9p --alsologtostderr -v=1              │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:50 UTC │                     │
	│ ssh       │ functional-655452 ssh findmnt -T /mount-9p | grep 9p                                                                                               │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:50 UTC │ 17 Dec 25 20:50 UTC │
	│ ssh       │ functional-655452 ssh -- ls -la /mount-9p                                                                                                          │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:50 UTC │ 17 Dec 25 20:50 UTC │
	│ ssh       │ functional-655452 ssh cat /mount-9p/test-1766004604061879051                                                                                       │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:50 UTC │ 17 Dec 25 20:50 UTC │
	│ ssh       │ functional-655452 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                   │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:50 UTC │                     │
	│ ssh       │ functional-655452 ssh sudo umount -f /mount-9p                                                                                                     │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:50 UTC │ 17 Dec 25 20:50 UTC │
	│ mount     │ -p functional-655452 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun191240204/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:50 UTC │                     │
	│ ssh       │ functional-655452 ssh findmnt -T /mount-9p | grep 9p                                                                                               │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:50 UTC │                     │
	│ ssh       │ functional-655452 ssh findmnt -T /mount-9p | grep 9p                                                                                               │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:50 UTC │ 17 Dec 25 20:50 UTC │
	│ ssh       │ functional-655452 ssh -- ls -la /mount-9p                                                                                                          │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:50 UTC │ 17 Dec 25 20:50 UTC │
	│ ssh       │ functional-655452 ssh sudo umount -f /mount-9p                                                                                                     │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:50 UTC │                     │
	│ mount     │ -p functional-655452 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun309516247/001:/mount1 --alsologtostderr -v=1                │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:50 UTC │                     │
	│ mount     │ -p functional-655452 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun309516247/001:/mount2 --alsologtostderr -v=1                │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:50 UTC │                     │
	│ mount     │ -p functional-655452 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun309516247/001:/mount3 --alsologtostderr -v=1                │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:50 UTC │                     │
	│ ssh       │ functional-655452 ssh findmnt -T /mount1                                                                                                           │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:50 UTC │                     │
	│ ssh       │ functional-655452 ssh findmnt -T /mount1                                                                                                           │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:50 UTC │ 17 Dec 25 20:50 UTC │
	│ ssh       │ functional-655452 ssh findmnt -T /mount2                                                                                                           │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:50 UTC │ 17 Dec 25 20:50 UTC │
	│ ssh       │ functional-655452 ssh findmnt -T /mount3                                                                                                           │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:50 UTC │ 17 Dec 25 20:50 UTC │
	│ mount     │ -p functional-655452 --kill=true                                                                                                                   │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:50 UTC │                     │
	│ start     │ -p functional-655452 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1        │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:50 UTC │                     │
	│ start     │ -p functional-655452 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1        │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:50 UTC │                     │
	│ start     │ -p functional-655452 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                  │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:50 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-655452 --alsologtostderr -v=1                                                                                     │ functional-655452 │ jenkins │ v1.37.0 │ 17 Dec 25 20:50 UTC │                     │
	└───────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 20:50:14
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 20:50:14.416108  546005 out.go:360] Setting OutFile to fd 1 ...
	I1217 20:50:14.416303  546005 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:50:14.416337  546005 out.go:374] Setting ErrFile to fd 2...
	I1217 20:50:14.416356  546005 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:50:14.416664  546005 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-485134/.minikube/bin
	I1217 20:50:14.417124  546005 out.go:368] Setting JSON to false
	I1217 20:50:14.418092  546005 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":12764,"bootTime":1765991851,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1217 20:50:14.418210  546005 start.go:143] virtualization:  
	I1217 20:50:14.421680  546005 out.go:179] * [functional-655452] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 20:50:14.424709  546005 out.go:179]   - MINIKUBE_LOCATION=21808
	I1217 20:50:14.424790  546005 notify.go:221] Checking for updates...
	I1217 20:50:14.430624  546005 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 20:50:14.433538  546005 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-485134/kubeconfig
	I1217 20:50:14.436475  546005 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-485134/.minikube
	I1217 20:50:14.439543  546005 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 20:50:14.442511  546005 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 20:50:14.445934  546005 config.go:182] Loaded profile config "functional-655452": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 20:50:14.446540  546005 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 20:50:14.472609  546005 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 20:50:14.472735  546005 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 20:50:14.533624  546005 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 20:50:14.524341367 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 20:50:14.533733  546005 docker.go:319] overlay module found
	I1217 20:50:14.536881  546005 out.go:179] * Using the docker driver based on existing profile
	I1217 20:50:14.539750  546005 start.go:309] selected driver: docker
	I1217 20:50:14.539770  546005 start.go:927] validating driver "docker" against &{Name:functional-655452 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-655452 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:50:14.539870  546005 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 20:50:14.539971  546005 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 20:50:14.610390  546005 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 20:50:14.601163794 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 20:50:14.610824  546005 cni.go:84] Creating CNI manager for ""
	I1217 20:50:14.610891  546005 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 20:50:14.610937  546005 start.go:353] cluster config:
	{Name:functional-655452 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-655452 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:50:14.614124  546005 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Dec 17 20:36:04 functional-655452 crio[10065]: time="2025-12-17T20:36:04.496553819Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 17 20:36:04 functional-655452 crio[10065]: time="2025-12-17T20:36:04.496588913Z" level=info msg="Starting seccomp notifier watcher"
	Dec 17 20:36:04 functional-655452 crio[10065]: time="2025-12-17T20:36:04.496641484Z" level=info msg="Create NRI interface"
	Dec 17 20:36:04 functional-655452 crio[10065]: time="2025-12-17T20:36:04.496756307Z" level=info msg="built-in NRI default validator is disabled"
	Dec 17 20:36:04 functional-655452 crio[10065]: time="2025-12-17T20:36:04.496765161Z" level=info msg="runtime interface created"
	Dec 17 20:36:04 functional-655452 crio[10065]: time="2025-12-17T20:36:04.496787586Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 17 20:36:04 functional-655452 crio[10065]: time="2025-12-17T20:36:04.496795537Z" level=info msg="runtime interface starting up..."
	Dec 17 20:36:04 functional-655452 crio[10065]: time="2025-12-17T20:36:04.496804792Z" level=info msg="starting plugins..."
	Dec 17 20:36:04 functional-655452 crio[10065]: time="2025-12-17T20:36:04.496818503Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 17 20:36:04 functional-655452 crio[10065]: time="2025-12-17T20:36:04.496896764Z" level=info msg="No systemd watchdog enabled"
	Dec 17 20:36:04 functional-655452 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 17 20:40:09 functional-655452 crio[10065]: time="2025-12-17T20:40:09.415834383Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-rc.1" id=58f6f0f1-488b-4240-a679-3e157f00d7e7 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:40:09 functional-655452 crio[10065]: time="2025-12-17T20:40:09.416590837Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" id=05b425cc-49a9-416d-8e00-62945047df36 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:40:09 functional-655452 crio[10065]: time="2025-12-17T20:40:09.417323538Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-rc.1" id=a9a38e6d-b290-413f-a93f-cf194783972f name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:40:09 functional-655452 crio[10065]: time="2025-12-17T20:40:09.417962945Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-rc.1" id=bdf79a37-e5ac-441d-baa9-990efb2af86f name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:40:09 functional-655452 crio[10065]: time="2025-12-17T20:40:09.418404377Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=f29ade00-2b87-48af-a8d1-af1f70d12fc1 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:40:09 functional-655452 crio[10065]: time="2025-12-17T20:40:09.418943992Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=aa01ccac-5dc1-42c2-9b96-b5307aedf908 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:40:09 functional-655452 crio[10065]: time="2025-12-17T20:40:09.419435131Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.6-0" id=3071c5cb-d2e8-40e4-bf26-10cfdb83c6ca name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:44:11 functional-655452 crio[10065]: time="2025-12-17T20:44:11.483168755Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-rc.1" id=116885b2-e96e-48a5-8c7d-749c0bd3c872 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:44:11 functional-655452 crio[10065]: time="2025-12-17T20:44:11.484179432Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" id=a7b99d88-fbbf-4485-ad77-1f09bb11e283 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:44:11 functional-655452 crio[10065]: time="2025-12-17T20:44:11.484714555Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-rc.1" id=1a3a48a9-47e1-4681-9a10-70d7c5e85de2 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:44:11 functional-655452 crio[10065]: time="2025-12-17T20:44:11.48529777Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-rc.1" id=48ecbe50-05dc-4736-8a4c-23a7b8f0b752 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:44:11 functional-655452 crio[10065]: time="2025-12-17T20:44:11.485817657Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=13bf3d26-ab2e-4773-bb7e-3fc288ba3714 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:44:11 functional-655452 crio[10065]: time="2025-12-17T20:44:11.486350122Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=3ebf0c9f-0c46-4d67-8924-03dd39ad4399 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:44:11 functional-655452 crio[10065]: time="2025-12-17T20:44:11.486847969Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.6-0" id=e3deb8c8-e04b-4949-9c80-5a8e5a9b5bee name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:50:17.337449   23612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:50:17.338016   23612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:50:17.339815   23612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:50:17.340261   23612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 20:50:17.341819   23612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec17 19:02] hrtimer: interrupt took 71042327 ns
	[Dec17 20:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec17 20:12] overlayfs: idmapped layers are currently not supported
	[  +0.080288] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec17 20:17] overlayfs: idmapped layers are currently not supported
	[Dec17 20:18] overlayfs: idmapped layers are currently not supported
	[Dec17 20:35] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 20:50:17 up  3:32,  0 user,  load average: 1.69, 0.54, 0.60
	Linux functional-655452 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 20:50:14 functional-655452 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 20:50:15 functional-655452 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1124.
	Dec 17 20:50:15 functional-655452 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 20:50:15 functional-655452 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 20:50:15 functional-655452 kubelet[23401]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 20:50:15 functional-655452 kubelet[23401]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 20:50:15 functional-655452 kubelet[23401]: E1217 20:50:15.623361   23401 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 20:50:15 functional-655452 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 20:50:15 functional-655452 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 20:50:16 functional-655452 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1125.
	Dec 17 20:50:16 functional-655452 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 20:50:16 functional-655452 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 20:50:16 functional-655452 kubelet[23505]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 20:50:16 functional-655452 kubelet[23505]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 20:50:16 functional-655452 kubelet[23505]: E1217 20:50:16.317649   23505 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 20:50:16 functional-655452 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 20:50:16 functional-655452 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 20:50:17 functional-655452 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1126.
	Dec 17 20:50:17 functional-655452 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 20:50:17 functional-655452 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 20:50:17 functional-655452 kubelet[23557]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 20:50:17 functional-655452 kubelet[23557]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 20:50:17 functional-655452 kubelet[23557]: E1217 20:50:17.140395   23557 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 20:50:17 functional-655452 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 20:50:17 functional-655452 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-655452 -n functional-655452
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-655452 -n functional-655452: exit status 2 (305.42876ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-655452" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels (1.39s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel (0.57s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-655452 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-655452 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 103. stderr: I1217 20:48:21.441278  541807 out.go:360] Setting OutFile to fd 1 ...
I1217 20:48:21.441954  541807 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 20:48:21.441977  541807 out.go:374] Setting ErrFile to fd 2...
I1217 20:48:21.441997  541807 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 20:48:21.442500  541807 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-485134/.minikube/bin
I1217 20:48:21.443671  541807 mustload.go:66] Loading cluster: functional-655452
I1217 20:48:21.444803  541807 config.go:182] Loaded profile config "functional-655452": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1217 20:48:21.446147  541807 cli_runner.go:164] Run: docker container inspect functional-655452 --format={{.State.Status}}
I1217 20:48:21.481466  541807 host.go:66] Checking if "functional-655452" exists ...
I1217 20:48:21.481783  541807 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1217 20:48:21.634378  541807 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 20:48:21.619624367 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1217 20:48:21.634500  541807 api_server.go:166] Checking apiserver status ...
I1217 20:48:21.634929  541807 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1217 20:48:21.635013  541807 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
I1217 20:48:21.684119  541807 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/functional-655452/id_rsa Username:docker}
W1217 20:48:21.791287  541807 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I1217 20:48:21.794618  541807 out.go:179] * The control-plane node functional-655452 apiserver is not running: (state=Stopped)
I1217 20:48:21.797881  541807 out.go:179]   To start a cluster, run: "minikube start -p functional-655452"

stdout: * The control-plane node functional-655452 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-655452"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-655452 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 541808: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-655452 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-655452 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-655452 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-655452 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-655452 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel (0.57s)
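
Note: the second tunnel never got a chance to conflict with the first; both tunnel processes exited with status 103 as soon as mustload found the apiserver stopped, and the cleanup noise ("process already finished", "file already closed") is just the harness tearing down daemons that had already died. Against a healthy cluster, the scenario this test exercises is simply, with the profile name from this run:

    # Two concurrent tunnels for the same profile, one per shell; each must stay in the foreground
    out/minikube-linux-arm64 tunnel -p functional-655452 --alsologtostderr
    out/minikube-linux-arm64 tunnel -p functional-655452 --alsologtostderr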

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/Setup (0.08s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-655452 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:212: (dbg) Non-zero exit: kubectl --context functional-655452 apply -f testdata/testsvc.yaml: exit status 1 (78.683031ms)

** stderr ** 
	error: error validating "testdata/testsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

** /stderr **
functional_test_tunnel_test.go:214: kubectl --context functional-655452 apply -f testdata/testsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/Setup (0.08s)
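
Note: the "failed to download openapi" error is kubectl's client-side validation fetching the schema from the apiserver before applying. The `--validate=false` escape hatch suggested in the stderr only skips that schema check; the apply request itself still needs a reachable endpoint at 192.168.49.2:8441, so it cannot rescue this run. To separate the two failure modes against the same context:

    # Fails immediately if the apiserver endpoint is unreachable at all
    kubectl --context functional-655452 version
    # Skips schema validation but still issues the server-side apply request
    kubectl --context functional-655452 apply -f testdata/testsvc.yaml --validate=false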

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect (94.2s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://10.111.59.156": Temporary Error: Get "http://10.111.59.156": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-655452 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-655452 get svc nginx-svc: exit status 1 (61.660353ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-655452 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect (94.20s)
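
Note: AccessDirect probes the nginx-svc cluster IP (10.111.59.156, assigned earlier in this run) through the tunnel, so with no tunnel and no apiserver it times out for the full retry budget before the follow-up kubectl diagnostics also hit the refused endpoint. On a working cluster the check reduces to roughly:

    # Confirm the Service object and its assigned cluster IP survived
    kubectl --context functional-655452 get svc nginx-svc -o wide
    # Probe the tunneled address with a bounded timeout, as the test does
    curl --max-time 10 http://10.111.59.156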

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-655452 create deployment hello-node --image kicbase/echo-server
functional_test.go:1451: (dbg) Non-zero exit: kubectl --context functional-655452 create deployment hello-node --image kicbase/echo-server: exit status 1 (57.577454ms)

** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test.go:1453: failed to create hello-node deployment with this command "kubectl --context functional-655452 create deployment hello-node --image kicbase/echo-server": exit status 1.
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp (0.06s)
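
Note: DeployApp is the setup step for the rest of the ServiceCmd group, so this single connection-refused error predetermines the List, JSONOutput, HTTPS, Format, and URL failures that follow. The setup amounts to the create command above plus publishing the deployment as a Service; the expose line below is an illustrative assumption about how such a deployment is typically published, not a command taken from this log:

    kubectl --context functional-655452 create deployment hello-node --image=kicbase/echo-server
    # hypothetical: echo-server conventionally listens on 8080
    kubectl --context functional-655452 expose deployment hello-node --type=NodePort --port=8080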

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List (0.25s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 service list
functional_test.go:1469: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-655452 service list: exit status 103 (251.901154ms)

-- stdout --
	* The control-plane node functional-655452 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-655452"

-- /stdout --
functional_test.go:1471: failed to do service list. args "out/minikube-linux-arm64 -p functional-655452 service list" : exit status 103
functional_test.go:1474: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-655452 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-655452\"\n"-
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List (0.25s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput (0.26s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 service list -o json
functional_test.go:1499: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-655452 service list -o json: exit status 103 (256.416657ms)

-- stdout --
	* The control-plane node functional-655452 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-655452"

-- /stdout --
functional_test.go:1501: failed to list services with json format. args "out/minikube-linux-arm64 -p functional-655452 service list -o json": exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput (0.26s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS (0.26s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-655452 service --namespace=default --https --url hello-node: exit status 103 (256.388504ms)

-- stdout --
	* The control-plane node functional-655452 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-655452"

-- /stdout --
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-655452 service --namespace=default --https --url hello-node" : exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS (0.26s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format (0.25s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-655452 service hello-node --url --format={{.IP}}: exit status 103 (250.05832ms)

-- stdout --
	* The control-plane node functional-655452 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-655452"

-- /stdout --
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-655452 service hello-node --url --format={{.IP}}": exit status 103
functional_test.go:1558: "* The control-plane node functional-655452 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-655452\"" is not a valid IP
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format (0.25s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL (0.47s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-655452 service hello-node --url: exit status 103 (473.010634ms)

-- stdout --
	* The control-plane node functional-655452 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-655452"

-- /stdout --
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-655452 service hello-node --url": exit status 103
functional_test.go:1575: found endpoint for hello-node: * The control-plane node functional-655452 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-655452"
functional_test.go:1579: failed to parse "* The control-plane node functional-655452 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-655452\"": parse "* The control-plane node functional-655452 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-655452\"": net/url: invalid control character in URL
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL (0.47s)
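
Note: List, JSONOutput, HTTPS, Format, and URL all fail identically: `minikube service` exits 103 and prints the "apiserver is not running" hint where service data should be, which the format and URL parsers then reject (the hint's embedded newline is the "invalid control character in URL" above). Against a running cluster with hello-node deployed, the command prints a plain NodePort URL:

    out/minikube-linux-arm64 -p functional-655452 service hello-node --url
    # expected output shape: http://<node-ip>:<node-port>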

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port (2.53s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-655452 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun878729531/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1766004604061879051" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun878729531/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1766004604061879051" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun878729531/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1766004604061879051" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun878729531/001/test-1766004604061879051
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-655452 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (338.65552ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1217 20:50:04.400849  488412 retry.go:31] will retry after 644.06769ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 17 20:50 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 17 20:50 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 17 20:50 test-1766004604061879051
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 ssh cat /mount-9p/test-1766004604061879051
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-655452 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:148: (dbg) Non-zero exit: kubectl --context functional-655452 replace --force -f testdata/busybox-mount-test.yaml: exit status 1 (60.194785ms)

** stderr ** 
	error: error when deleting "testdata/busybox-mount-test.yaml": Delete "https://192.168.49.2:8441/api/v1/namespaces/default/pods/busybox-mount": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test_mount_test.go:150: failed to 'kubectl replace' for busybox-mount-test. args "kubectl --context functional-655452 replace --force -f testdata/busybox-mount-test.yaml" : exit status 1
functional_test_mount_test.go:80: "TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port" failed, getting debug info...
functional_test_mount_test.go:81: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates"
functional_test_mount_test.go:81: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-655452 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates": exit status 1 (282.052257ms)

-- stdout --
	192.168.49.1 on /mount-9p type 9p (rw,relatime,sync,dirsync,dfltuid=1000,dfltgid=997,access=any,msize=262144,trans=tcp,noextend,port=41627)
	total 2
	-rw-r--r-- 1 docker docker 24 Dec 17 20:50 created-by-test
	-rw-r--r-- 1 docker docker 24 Dec 17 20:50 created-by-test-removed-by-pod
	-rw-r--r-- 1 docker docker 24 Dec 17 20:50 test-1766004604061879051
	cat: /mount-9p/pod-dates: No such file or directory

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:83: debugging command "out/minikube-linux-arm64 -p functional-655452 ssh \"mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates\"" failed : exit status 1
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-655452 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun878729531/001:/mount-9p --alsologtostderr -v=1] ...
functional_test_mount_test.go:94: (dbg) [out/minikube-linux-arm64 mount -p functional-655452 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun878729531/001:/mount-9p --alsologtostderr -v=1] stdout:
* Mounting host path /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun878729531/001 into VM as /mount-9p ...
- Mount type:   9p
- User ID:      docker
- Group ID:     docker
- Version:      9p2000.L
- Message Size: 262144
- Options:      map[]
- Bind Address: 192.168.49.1:41627
* Userspace file server: 
ufs starting
* Successfully mounted /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun878729531/001 to /mount-9p

* NOTE: This process must stay alive for the mount to be accessible ...
* Unmounting /mount-9p ...

functional_test_mount_test.go:94: (dbg) [out/minikube-linux-arm64 mount -p functional-655452 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun878729531/001:/mount-9p --alsologtostderr -v=1] stderr:
I1217 20:50:04.122651  544051 out.go:360] Setting OutFile to fd 1 ...
I1217 20:50:04.122824  544051 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 20:50:04.122836  544051 out.go:374] Setting ErrFile to fd 2...
I1217 20:50:04.122842  544051 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 20:50:04.123363  544051 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-485134/.minikube/bin
I1217 20:50:04.124045  544051 mustload.go:66] Loading cluster: functional-655452
I1217 20:50:04.124403  544051 config.go:182] Loaded profile config "functional-655452": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1217 20:50:04.124894  544051 cli_runner.go:164] Run: docker container inspect functional-655452 --format={{.State.Status}}
I1217 20:50:04.144210  544051 host.go:66] Checking if "functional-655452" exists ...
I1217 20:50:04.144652  544051 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1217 20:50:04.239501  544051 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 20:50:04.229680315 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1217 20:50:04.239737  544051 cli_runner.go:164] Run: docker network inspect functional-655452 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1217 20:50:04.272991  544051 out.go:179] * Mounting host path /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun878729531/001 into VM as /mount-9p ...
I1217 20:50:04.276330  544051 out.go:179]   - Mount type:   9p
I1217 20:50:04.280275  544051 out.go:179]   - User ID:      docker
I1217 20:50:04.283656  544051 out.go:179]   - Group ID:     docker
I1217 20:50:04.286585  544051 out.go:179]   - Version:      9p2000.L
I1217 20:50:04.289330  544051 out.go:179]   - Message Size: 262144
I1217 20:50:04.292805  544051 out.go:179]   - Options:      map[]
I1217 20:50:04.295619  544051 out.go:179]   - Bind Address: 192.168.49.1:41627
I1217 20:50:04.298758  544051 out.go:179] * Userspace file server: 
I1217 20:50:04.298924  544051 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1217 20:50:04.299056  544051 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
I1217 20:50:04.323154  544051 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/functional-655452/id_rsa Username:docker}
I1217 20:50:04.426640  544051 mount.go:180] unmount for /mount-9p ran successfully
I1217 20:50:04.426670  544051 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /mount-9p"
I1217 20:50:04.435352  544051 ssh_runner.go:195] Run: /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=41627,trans=tcp,version=9p2000.L 192.168.49.1 /mount-9p"
I1217 20:50:04.446455  544051 main.go:127] stdlog: ufs.go:141 connected
I1217 20:50:04.446630  544051 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50380 Tversion tag 65535 msize 262144 version '9P2000.L'
I1217 20:50:04.446678  544051 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50380 Rversion tag 65535 msize 262144 version '9P2000'
I1217 20:50:04.446919  544051 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50380 Tattach tag 0 fid 0 afid 4294967295 uname 'nobody' nuname 0 aname ''
I1217 20:50:04.446987  544051 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50380 Rattach tag 0 aqid (ed6f3d 2e13bc98 'd')
I1217 20:50:04.447267  544051 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50380 Tstat tag 0 fid 0
I1217 20:50:04.447322  544051 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50380 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (ed6f3d 2e13bc98 'd') m d775 at 0 mt 1766004604 l 4096 t 0 d 0 ext )
I1217 20:50:04.449224  544051 lock.go:50] WriteFile acquiring /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/.mount-process: {Name:mkffd9955ec8c2fb5030a827e4fbbd8c97639b07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1217 20:50:04.449433  544051 mount.go:105] mount successful: ""
I1217 20:50:04.452840  544051 out.go:179] * Successfully mounted /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun878729531/001 to /mount-9p
I1217 20:50:04.455663  544051 out.go:203] 
I1217 20:50:04.458477  544051 out.go:179] * NOTE: This process must stay alive for the mount to be accessible ...
I1217 20:50:05.596282  544051 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50380 Tstat tag 0 fid 0
I1217 20:50:05.596358  544051 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50380 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (ed6f3d 2e13bc98 'd') m d775 at 0 mt 1766004604 l 4096 t 0 d 0 ext )
I1217 20:50:05.596743  544051 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50380 Twalk tag 0 fid 0 newfid 1 
I1217 20:50:05.596779  544051 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50380 Rwalk tag 0 
I1217 20:50:05.596904  544051 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50380 Topen tag 0 fid 1 mode 0
I1217 20:50:05.596955  544051 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50380 Ropen tag 0 qid (ed6f3d 2e13bc98 'd') iounit 0
I1217 20:50:05.597101  544051 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50380 Tstat tag 0 fid 0
I1217 20:50:05.597162  544051 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50380 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (ed6f3d 2e13bc98 'd') m d775 at 0 mt 1766004604 l 4096 t 0 d 0 ext )
I1217 20:50:05.597310  544051 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50380 Tread tag 0 fid 1 offset 0 count 262120
I1217 20:50:05.597439  544051 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50380 Rread tag 0 count 258
I1217 20:50:05.597602  544051 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50380 Tread tag 0 fid 1 offset 258 count 261862
I1217 20:50:05.597631  544051 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50380 Rread tag 0 count 0
I1217 20:50:05.597766  544051 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50380 Tread tag 0 fid 1 offset 258 count 262120
I1217 20:50:05.597793  544051 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50380 Rread tag 0 count 0
I1217 20:50:05.597923  544051 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50380 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1217 20:50:05.597955  544051 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50380 Rwalk tag 0 (ed6f3e 2e13bc98 '') 
I1217 20:50:05.598080  544051 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50380 Tstat tag 0 fid 2
I1217 20:50:05.598124  544051 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50380 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (ed6f3e 2e13bc98 '') m 644 at 0 mt 1766004604 l 24 t 0 d 0 ext )
I1217 20:50:05.598251  544051 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50380 Tstat tag 0 fid 2
I1217 20:50:05.598285  544051 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50380 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (ed6f3e 2e13bc98 '') m 644 at 0 mt 1766004604 l 24 t 0 d 0 ext )
I1217 20:50:05.598422  544051 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50380 Tclunk tag 0 fid 2
I1217 20:50:05.598451  544051 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50380 Rclunk tag 0
I1217 20:50:05.598580  544051 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50380 Twalk tag 0 fid 0 newfid 2 0:'test-1766004604061879051' 
I1217 20:50:05.598615  544051 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50380 Rwalk tag 0 (ed6f40 2e13bc98 '') 
I1217 20:50:05.598744  544051 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50380 Tstat tag 0 fid 2
I1217 20:50:05.598774  544051 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50380 Rstat tag 0 st ('test-1766004604061879051' 'jenkins' 'jenkins' '' q (ed6f40 2e13bc98 '') m 644 at 0 mt 1766004604 l 24 t 0 d 0 ext )
I1217 20:50:05.598899  544051 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50380 Tstat tag 0 fid 2
I1217 20:50:05.598931  544051 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50380 Rstat tag 0 st ('test-1766004604061879051' 'jenkins' 'jenkins' '' q (ed6f40 2e13bc98 '') m 644 at 0 mt 1766004604 l 24 t 0 d 0 ext )
I1217 20:50:05.599069  544051 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50380 Tclunk tag 0 fid 2
I1217 20:50:05.599092  544051 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50380 Rclunk tag 0
I1217 20:50:05.599221  544051 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50380 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1217 20:50:05.599265  544051 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50380 Rwalk tag 0 (ed6f3f 2e13bc98 '') 
I1217 20:50:05.599373  544051 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50380 Tstat tag 0 fid 2
I1217 20:50:05.599413  544051 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50380 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (ed6f3f 2e13bc98 '') m 644 at 0 mt 1766004604 l 24 t 0 d 0 ext )
I1217 20:50:05.599568  544051 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50380 Tstat tag 0 fid 2
I1217 20:50:05.599622  544051 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50380 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (ed6f3f 2e13bc98 '') m 644 at 0 mt 1766004604 l 24 t 0 d 0 ext )
I1217 20:50:05.599748  544051 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50380 Tclunk tag 0 fid 2
I1217 20:50:05.599772  544051 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50380 Rclunk tag 0
I1217 20:50:05.599896  544051 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50380 Tread tag 0 fid 1 offset 258 count 262120
I1217 20:50:05.599923  544051 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50380 Rread tag 0 count 0
I1217 20:50:05.600055  544051 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50380 Tclunk tag 0 fid 1
I1217 20:50:05.600084  544051 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50380 Rclunk tag 0
I1217 20:50:05.860318  544051 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50380 Twalk tag 0 fid 0 newfid 1 0:'test-1766004604061879051' 
I1217 20:50:05.860396  544051 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50380 Rwalk tag 0 (ed6f40 2e13bc98 '') 
I1217 20:50:05.860534  544051 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50380 Tstat tag 0 fid 1
I1217 20:50:05.860579  544051 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50380 Rstat tag 0 st ('test-1766004604061879051' 'jenkins' 'jenkins' '' q (ed6f40 2e13bc98 '') m 644 at 0 mt 1766004604 l 24 t 0 d 0 ext )
I1217 20:50:05.860697  544051 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50380 Twalk tag 0 fid 1 newfid 2 
I1217 20:50:05.860731  544051 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50380 Rwalk tag 0 
I1217 20:50:05.860818  544051 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50380 Topen tag 0 fid 2 mode 0
I1217 20:50:05.860892  544051 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50380 Ropen tag 0 qid (ed6f40 2e13bc98 '') iounit 0
I1217 20:50:05.861008  544051 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50380 Tstat tag 0 fid 1
I1217 20:50:05.861047  544051 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50380 Rstat tag 0 st ('test-1766004604061879051' 'jenkins' 'jenkins' '' q (ed6f40 2e13bc98 '') m 644 at 0 mt 1766004604 l 24 t 0 d 0 ext )
I1217 20:50:05.861183  544051 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50380 Tread tag 0 fid 2 offset 0 count 262120
I1217 20:50:05.861232  544051 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50380 Rread tag 0 count 24
I1217 20:50:05.861355  544051 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50380 Tread tag 0 fid 2 offset 24 count 262120
I1217 20:50:05.861392  544051 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50380 Rread tag 0 count 0
I1217 20:50:05.861517  544051 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50380 Tread tag 0 fid 2 offset 24 count 262120
I1217 20:50:05.861551  544051 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50380 Rread tag 0 count 0
I1217 20:50:05.861682  544051 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50380 Tclunk tag 0 fid 2
I1217 20:50:05.861713  544051 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50380 Rclunk tag 0
I1217 20:50:05.861827  544051 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50380 Tclunk tag 0 fid 1
I1217 20:50:05.861857  544051 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50380 Rclunk tag 0
I1217 20:50:06.206625  544051 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50380 Tstat tag 0 fid 0
I1217 20:50:06.206699  544051 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50380 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (ed6f3d 2e13bc98 'd') m d775 at 0 mt 1766004604 l 4096 t 0 d 0 ext )
I1217 20:50:06.207078  544051 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50380 Twalk tag 0 fid 0 newfid 1 
I1217 20:50:06.207118  544051 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50380 Rwalk tag 0 
I1217 20:50:06.207260  544051 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50380 Topen tag 0 fid 1 mode 0
I1217 20:50:06.207313  544051 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50380 Ropen tag 0 qid (ed6f3d 2e13bc98 'd') iounit 0
I1217 20:50:06.207442  544051 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50380 Tstat tag 0 fid 0
I1217 20:50:06.207483  544051 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50380 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (ed6f3d 2e13bc98 'd') m d775 at 0 mt 1766004604 l 4096 t 0 d 0 ext )
I1217 20:50:06.207680  544051 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50380 Tread tag 0 fid 1 offset 0 count 262120
I1217 20:50:06.207790  544051 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50380 Rread tag 0 count 258
I1217 20:50:06.207934  544051 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50380 Tread tag 0 fid 1 offset 258 count 261862
I1217 20:50:06.207962  544051 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50380 Rread tag 0 count 0
I1217 20:50:06.208137  544051 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50380 Tread tag 0 fid 1 offset 258 count 262120
I1217 20:50:06.208169  544051 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50380 Rread tag 0 count 0
I1217 20:50:06.208306  544051 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50380 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1217 20:50:06.208348  544051 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50380 Rwalk tag 0 (ed6f3e 2e13bc98 '') 
I1217 20:50:06.208474  544051 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50380 Tstat tag 0 fid 2
I1217 20:50:06.208507  544051 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50380 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (ed6f3e 2e13bc98 '') m 644 at 0 mt 1766004604 l 24 t 0 d 0 ext )
I1217 20:50:06.208639  544051 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50380 Tstat tag 0 fid 2
I1217 20:50:06.208687  544051 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50380 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (ed6f3e 2e13bc98 '') m 644 at 0 mt 1766004604 l 24 t 0 d 0 ext )
I1217 20:50:06.208818  544051 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50380 Tclunk tag 0 fid 2
I1217 20:50:06.208840  544051 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50380 Rclunk tag 0
I1217 20:50:06.208979  544051 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50380 Twalk tag 0 fid 0 newfid 2 0:'test-1766004604061879051' 
I1217 20:50:06.209011  544051 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50380 Rwalk tag 0 (ed6f40 2e13bc98 '') 
I1217 20:50:06.209143  544051 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50380 Tstat tag 0 fid 2
I1217 20:50:06.209176  544051 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50380 Rstat tag 0 st ('test-1766004604061879051' 'jenkins' 'jenkins' '' q (ed6f40 2e13bc98 '') m 644 at 0 mt 1766004604 l 24 t 0 d 0 ext )
I1217 20:50:06.209299  544051 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50380 Tstat tag 0 fid 2
I1217 20:50:06.209340  544051 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50380 Rstat tag 0 st ('test-1766004604061879051' 'jenkins' 'jenkins' '' q (ed6f40 2e13bc98 '') m 644 at 0 mt 1766004604 l 24 t 0 d 0 ext )
I1217 20:50:06.209458  544051 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50380 Tclunk tag 0 fid 2
I1217 20:50:06.209485  544051 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50380 Rclunk tag 0
I1217 20:50:06.209622  544051 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50380 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1217 20:50:06.209651  544051 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50380 Rwalk tag 0 (ed6f3f 2e13bc98 '') 
I1217 20:50:06.209776  544051 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50380 Tstat tag 0 fid 2
I1217 20:50:06.209805  544051 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50380 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (ed6f3f 2e13bc98 '') m 644 at 0 mt 1766004604 l 24 t 0 d 0 ext )
I1217 20:50:06.209954  544051 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50380 Tstat tag 0 fid 2
I1217 20:50:06.209999  544051 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50380 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (ed6f3f 2e13bc98 '') m 644 at 0 mt 1766004604 l 24 t 0 d 0 ext )
I1217 20:50:06.210128  544051 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50380 Tclunk tag 0 fid 2
I1217 20:50:06.210152  544051 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50380 Rclunk tag 0
I1217 20:50:06.210281  544051 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50380 Tread tag 0 fid 1 offset 258 count 262120
I1217 20:50:06.210305  544051 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50380 Rread tag 0 count 0
I1217 20:50:06.210449  544051 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50380 Tclunk tag 0 fid 1
I1217 20:50:06.210480  544051 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50380 Rclunk tag 0
I1217 20:50:06.211739  544051 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50380 Twalk tag 0 fid 0 newfid 1 0:'pod-dates' 
I1217 20:50:06.211843  544051 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50380 Rerror tag 0 ename 'file not found' ecode 0
I1217 20:50:06.469151  544051 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:50380 Tclunk tag 0 fid 0
I1217 20:50:06.469203  544051 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:50380 Rclunk tag 0
I1217 20:50:06.470368  544051 main.go:127] stdlog: ufs.go:147 disconnected
I1217 20:50:06.492672  544051 out.go:179] * Unmounting /mount-9p ...
I1217 20:50:06.495712  544051 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1217 20:50:06.502970  544051 mount.go:180] unmount for /mount-9p ran successfully
I1217 20:50:06.503074  544051 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/.mount-process: {Name:mkffd9955ec8c2fb5030a827e4fbbd8c97639b07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1217 20:50:06.506135  544051 out.go:203] 
W1217 20:50:06.509155  544051 out.go:285] X Exiting due to MK_INTERRUPTED: Received terminated signal
X Exiting due to MK_INTERRUPTED: Received terminated signal
I1217 20:50:06.512001  544051 out.go:203] 
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port (2.53s)
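
Note: unlike the service tests, the mount machinery itself worked here. The ufs traces above show a complete 9P session: Tversion/Rversion negotiation, Tattach to the exported directory, then Twalk/Topen/Tread cycles that successfully read back all three seeded files; the lone Rerror is for pod-dates, which would have been written by the busybox-mount pod that could not be created against the stopped apiserver. The host-side half can be exercised without Kubernetes at all, along these lines (profile, binary, and guest path from this run; the host path is hypothetical):

    # Terminal 1: serve a host directory into the guest over 9p; must stay in the foreground
    out/minikube-linux-arm64 mount -p functional-655452 /tmp/somedir:/mount-9p
    # Terminal 2: confirm the guest sees the 9p filesystem
    out/minikube-linux-arm64 -p functional-655452 ssh "findmnt -T /mount-9p"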

TestMultiControlPlane/serial/RestartClusterKeepsNodes (432.1s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-148567 stop --alsologtostderr -v 5: (27.677279127s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 start --wait true --alsologtostderr -v 5
E1217 20:58:21.928393  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:58:33.924232  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-643319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:58:49.632400  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:58:56.663748  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 21:00:30.851762  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-643319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 21:03:21.928066  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-148567 start --wait true --alsologtostderr -v 5: exit status 80 (6m40.768175883s)

-- stdout --
	* [ha-148567] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21808
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21808-485134/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-485134/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "ha-148567" primary control-plane node in "ha-148567" cluster
	* Pulling base image v0.0.48-1765661130-22141 ...
	* Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	* Enabled addons: 
	
	* Starting "ha-148567-m02" control-plane node in "ha-148567" cluster
	* Pulling base image v0.0.48-1765661130-22141 ...
	* Found network options:
	  - NO_PROXY=192.168.49.2
	* Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	  - env NO_PROXY=192.168.49.2
	* Verifying Kubernetes components...
	
	* Starting "ha-148567-m03" control-plane node in "ha-148567" cluster
	* Pulling base image v0.0.48-1765661130-22141 ...
	* Found network options:
	  - NO_PROXY=192.168.49.2,192.168.49.3
	* Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	  - env NO_PROXY=192.168.49.2
	  - env NO_PROXY=192.168.49.2,192.168.49.3
	* Verifying Kubernetes components...
	
	* Starting "ha-148567-m04" worker node in "ha-148567" cluster
	* Pulling base image v0.0.48-1765661130-22141 ...
	* Found network options:
	  - NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	* Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	  - env NO_PROXY=192.168.49.2
	  - env NO_PROXY=192.168.49.2,192.168.49.3
	  - env NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	* Verifying Kubernetes components...
	
	

-- /stdout --
** stderr ** 
	I1217 20:57:11.358859  568189 out.go:360] Setting OutFile to fd 1 ...
	I1217 20:57:11.359079  568189 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:57:11.359113  568189 out.go:374] Setting ErrFile to fd 2...
	I1217 20:57:11.359134  568189 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:57:11.359399  568189 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-485134/.minikube/bin
	I1217 20:57:11.359857  568189 out.go:368] Setting JSON to false
	I1217 20:57:11.360732  568189 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":13181,"bootTime":1765991851,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1217 20:57:11.360834  568189 start.go:143] virtualization:  
	I1217 20:57:11.366162  568189 out.go:179] * [ha-148567] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 20:57:11.369165  568189 out.go:179]   - MINIKUBE_LOCATION=21808
	I1217 20:57:11.369340  568189 notify.go:221] Checking for updates...
	I1217 20:57:11.372773  568189 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 20:57:11.376038  568189 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-485134/kubeconfig
	I1217 20:57:11.378993  568189 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-485134/.minikube
	I1217 20:57:11.381848  568189 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 20:57:11.384979  568189 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 20:57:11.388367  568189 config.go:182] Loaded profile config "ha-148567": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:57:11.388514  568189 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 20:57:11.413210  568189 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 20:57:11.413329  568189 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 20:57:11.470988  568189 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-17 20:57:11.461612355 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 20:57:11.471099  568189 docker.go:319] overlay module found
	I1217 20:57:11.474237  568189 out.go:179] * Using the docker driver based on existing profile
	I1217 20:57:11.477144  568189 start.go:309] selected driver: docker
	I1217 20:57:11.477166  568189 start.go:927] validating driver "docker" against &{Name:ha-148567 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:ha-148567 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:57:11.477308  568189 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 20:57:11.477418  568189 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 20:57:11.541431  568189 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-17 20:57:11.532691865 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 20:57:11.541848  568189 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 20:57:11.541879  568189 cni.go:84] Creating CNI manager for ""
	I1217 20:57:11.541937  568189 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1217 20:57:11.541988  568189 start.go:353] cluster config:
	{Name:ha-148567 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:ha-148567 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:57:11.546865  568189 out.go:179] * Starting "ha-148567" primary control-plane node in "ha-148567" cluster
	I1217 20:57:11.549690  568189 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 20:57:11.552597  568189 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 20:57:11.555352  568189 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 20:57:11.555402  568189 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21808-485134/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4
	I1217 20:57:11.555416  568189 cache.go:65] Caching tarball of preloaded images
	I1217 20:57:11.555437  568189 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 20:57:11.555506  568189 preload.go:238] Found /home/jenkins/minikube-integration/21808-485134/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1217 20:57:11.555517  568189 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1217 20:57:11.555734  568189 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/config.json ...
	I1217 20:57:11.574595  568189 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 20:57:11.574619  568189 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 20:57:11.574640  568189 cache.go:243] Successfully downloaded all kic artifacts
	I1217 20:57:11.574675  568189 start.go:360] acquireMachinesLock for ha-148567: {Name:mkeea083db7bee665ba841ae2b673f302d3ac8a8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 20:57:11.574737  568189 start.go:364] duration metric: took 37.949µs to acquireMachinesLock for "ha-148567"
	I1217 20:57:11.574761  568189 start.go:96] Skipping create...Using existing machine configuration
	I1217 20:57:11.574767  568189 fix.go:54] fixHost starting: 
	I1217 20:57:11.575046  568189 cli_runner.go:164] Run: docker container inspect ha-148567 --format={{.State.Status}}
	I1217 20:57:11.592879  568189 fix.go:112] recreateIfNeeded on ha-148567: state=Stopped err=<nil>
	W1217 20:57:11.592909  568189 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 20:57:11.596175  568189 out.go:252] * Restarting existing docker container for "ha-148567" ...
	I1217 20:57:11.596256  568189 cli_runner.go:164] Run: docker start ha-148567
	I1217 20:57:11.847065  568189 cli_runner.go:164] Run: docker container inspect ha-148567 --format={{.State.Status}}
	I1217 20:57:11.870185  568189 kic.go:430] container "ha-148567" state is running.
	I1217 20:57:11.870824  568189 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-148567
	I1217 20:57:11.897361  568189 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/config.json ...
	I1217 20:57:11.897594  568189 machine.go:94] provisionDockerMachine start ...
	I1217 20:57:11.897659  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567
	I1217 20:57:11.920598  568189 main.go:143] libmachine: Using SSH client type: native
	I1217 20:57:11.920937  568189 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33208 <nil> <nil>}
	I1217 20:57:11.920945  568189 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 20:57:11.923893  568189 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47488->127.0.0.1:33208: read: connection reset by peer
	I1217 20:57:15.067633  568189 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-148567
	
	I1217 20:57:15.067656  568189 ubuntu.go:182] provisioning hostname "ha-148567"
	I1217 20:57:15.067737  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567
	I1217 20:57:15.086692  568189 main.go:143] libmachine: Using SSH client type: native
	I1217 20:57:15.087056  568189 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33208 <nil> <nil>}
	I1217 20:57:15.087069  568189 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-148567 && echo "ha-148567" | sudo tee /etc/hostname
	I1217 20:57:15.229459  568189 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-148567
	
	I1217 20:57:15.229547  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567
	I1217 20:57:15.248113  568189 main.go:143] libmachine: Using SSH client type: native
	I1217 20:57:15.248429  568189 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33208 <nil> <nil>}
	I1217 20:57:15.248448  568189 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-148567' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-148567/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-148567' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 20:57:15.380233  568189 main.go:143] libmachine: SSH cmd err, output: <nil>: 
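The /etc/hosts script above is idempotent: grep -xq matches whole lines only, so an existing 127.0.1.1 entry is rewritten in place and the append branch runs at most once. The expected end state on the node, illustratively, is:

    $ grep '^127.0.1.1' /etc/hosts
    127.0.1.1 ha-148567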
	I1217 20:57:15.380256  568189 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21808-485134/.minikube CaCertPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21808-485134/.minikube}
	I1217 20:57:15.380318  568189 ubuntu.go:190] setting up certificates
	I1217 20:57:15.380340  568189 provision.go:84] configureAuth start
	I1217 20:57:15.380427  568189 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-148567
	I1217 20:57:15.398346  568189 provision.go:143] copyHostCerts
	I1217 20:57:15.398396  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem
	I1217 20:57:15.398436  568189 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem, removing ...
	I1217 20:57:15.398443  568189 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem
	I1217 20:57:15.398519  568189 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem (1675 bytes)
	I1217 20:57:15.398610  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem
	I1217 20:57:15.398628  568189 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem, removing ...
	I1217 20:57:15.398632  568189 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem
	I1217 20:57:15.398658  568189 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem (1082 bytes)
	I1217 20:57:15.398706  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem
	I1217 20:57:15.398722  568189 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem, removing ...
	I1217 20:57:15.398725  568189 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem
	I1217 20:57:15.398748  568189 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem (1123 bytes)
	I1217 20:57:15.398801  568189 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem org=jenkins.ha-148567 san=[127.0.0.1 192.168.49.2 ha-148567 localhost minikube]
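minikube mints that server certificate in its own Go code; as a rough equivalent, a signing step against the same CA and SAN set could be sketched with openssl (file names here are illustrative, not minikube's):

    # sketch only: sign a server cert carrying the SANs listed above
    openssl req -new -newkey rsa:2048 -nodes -subj "/O=jenkins.ha-148567" \
      -keyout server-key.pem -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -days 365 -out server.pem \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.49.2,DNS:ha-148567,DNS:localhost,DNS:minikube')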
	I1217 20:57:16.169383  568189 provision.go:177] copyRemoteCerts
	I1217 20:57:16.169461  568189 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 20:57:16.169502  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567
	I1217 20:57:16.187039  568189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33208 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/ha-148567/id_rsa Username:docker}
	I1217 20:57:16.287499  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1217 20:57:16.287563  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 20:57:16.305548  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1217 20:57:16.305623  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1217 20:57:16.324256  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1217 20:57:16.324318  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 20:57:16.342494  568189 provision.go:87] duration metric: took 962.127276ms to configureAuth
	I1217 20:57:16.342522  568189 ubuntu.go:206] setting minikube options for container-runtime
	I1217 20:57:16.342771  568189 config.go:182] Loaded profile config "ha-148567": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:57:16.342894  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567
	I1217 20:57:16.360548  568189 main.go:143] libmachine: Using SSH client type: native
	I1217 20:57:16.360872  568189 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33208 <nil> <nil>}
	I1217 20:57:16.360886  568189 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 20:57:16.731877  568189 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 20:57:16.731907  568189 machine.go:97] duration metric: took 4.834303602s to provisionDockerMachine
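The systemctl restart crio at the end of that SSH command is what makes the runtime re-read /etc/sysconfig/crio.minikube. Assuming the kicbase crio unit sources that env file, the flag can be confirmed on the node with something like:

    # illustrative check that the insecure-registry flag reached the daemon
    systemctl show crio -p Environment
    ps -o args= -C crio | tr ' ' '\n' | grep insecure-registry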
	I1217 20:57:16.731920  568189 start.go:293] postStartSetup for "ha-148567" (driver="docker")
	I1217 20:57:16.731930  568189 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 20:57:16.732002  568189 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 20:57:16.732081  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567
	I1217 20:57:16.754210  568189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33208 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/ha-148567/id_rsa Username:docker}
	I1217 20:57:16.847793  568189 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 20:57:16.851353  568189 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 20:57:16.851380  568189 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 20:57:16.851393  568189 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-485134/.minikube/addons for local assets ...
	I1217 20:57:16.851448  568189 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-485134/.minikube/files for local assets ...
	I1217 20:57:16.851530  568189 filesync.go:149] local asset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem -> 4884122.pem in /etc/ssl/certs
	I1217 20:57:16.851537  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem -> /etc/ssl/certs/4884122.pem
	I1217 20:57:16.851668  568189 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 20:57:16.859497  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem --> /etc/ssl/certs/4884122.pem (1708 bytes)
	I1217 20:57:16.877490  568189 start.go:296] duration metric: took 145.555245ms for postStartSetup
	I1217 20:57:16.877573  568189 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 20:57:16.877619  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567
	I1217 20:57:16.895083  568189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33208 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/ha-148567/id_rsa Username:docker}
	I1217 20:57:16.988718  568189 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 20:57:16.993912  568189 fix.go:56] duration metric: took 5.419138386s for fixHost
	I1217 20:57:16.993941  568189 start.go:83] releasing machines lock for "ha-148567", held for 5.419189965s
	I1217 20:57:16.994013  568189 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-148567
	I1217 20:57:17.015130  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem (1338 bytes)
	W1217 20:57:17.015192  568189 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412_empty.pem, impossibly tiny 0 bytes
	I1217 20:57:17.015202  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 20:57:17.015243  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem (1082 bytes)
	I1217 20:57:17.015276  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem (1123 bytes)
	I1217 20:57:17.015305  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem (1675 bytes)
	I1217 20:57:17.015359  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem (1708 bytes)
	I1217 20:57:17.015397  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem -> /usr/share/ca-certificates/488412.pem
	I1217 20:57:17.015413  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem -> /usr/share/ca-certificates/4884122.pem
	I1217 20:57:17.015426  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:57:17.015449  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem --> /usr/share/ca-certificates/488412.pem (1338 bytes)
	I1217 20:57:17.015509  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567
	I1217 20:57:17.032525  568189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33208 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/ha-148567/id_rsa Username:docker}
	I1217 20:57:17.141021  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem --> /usr/share/ca-certificates/4884122.pem (1708 bytes)
	I1217 20:57:17.158347  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 20:57:17.175988  568189 ssh_runner.go:195] Run: openssl version
	I1217 20:57:17.182704  568189 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4884122.pem
	I1217 20:57:17.190121  568189 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4884122.pem /etc/ssl/certs/4884122.pem
	I1217 20:57:17.197484  568189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4884122.pem
	I1217 20:57:17.201345  568189 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 20:21 /usr/share/ca-certificates/4884122.pem
	I1217 20:57:17.201430  568189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4884122.pem
	I1217 20:57:17.242261  568189 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 20:57:17.249871  568189 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:57:17.257360  568189 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 20:57:17.265311  568189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:57:17.269162  568189 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:57:17.269230  568189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:57:17.310484  568189 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 20:57:17.317908  568189 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/488412.pem
	I1217 20:57:17.325039  568189 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/488412.pem /etc/ssl/certs/488412.pem
	I1217 20:57:17.332443  568189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/488412.pem
	I1217 20:57:17.336120  568189 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 20:21 /usr/share/ca-certificates/488412.pem
	I1217 20:57:17.336229  568189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/488412.pem
	I1217 20:57:17.377375  568189 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 20:57:17.384997  568189 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-certificates >/dev/null 2>&1 && sudo update-ca-certificates || true"
	I1217 20:57:17.388508  568189 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-trust >/dev/null 2>&1 && sudo update-ca-trust extract || true"
	I1217 20:57:17.392154  568189 ssh_runner.go:195] Run: cat /version.json
	I1217 20:57:17.392265  568189 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 20:57:17.396842  568189 ssh_runner.go:195] Run: systemctl --version
	I1217 20:57:17.490250  568189 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 20:57:17.526620  568189 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 20:57:17.531388  568189 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 20:57:17.531464  568189 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 20:57:17.539341  568189 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 20:57:17.539367  568189 start.go:496] detecting cgroup driver to use...
	I1217 20:57:17.539398  568189 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 20:57:17.539448  568189 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 20:57:17.554515  568189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 20:57:17.567414  568189 docker.go:218] disabling cri-docker service (if available) ...
	I1217 20:57:17.567477  568189 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 20:57:17.582837  568189 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 20:57:17.596146  568189 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 20:57:17.711761  568189 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 20:57:17.824951  568189 docker.go:234] disabling docker service ...
	I1217 20:57:17.825056  568189 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 20:57:17.839370  568189 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 20:57:17.852221  568189 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 20:57:17.978299  568189 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 20:57:18.106183  568189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 20:57:18.119265  568189 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 20:57:18.135218  568189 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 20:57:18.135286  568189 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:57:18.144824  568189 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1217 20:57:18.144911  568189 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:57:18.153531  568189 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:57:18.162007  568189 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:57:18.170781  568189 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 20:57:18.178861  568189 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:57:18.188770  568189 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:57:18.197027  568189 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
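Together, the sed edits above pin the pause image, force the cgroupfs cgroup manager, move conmon into the pod cgroup, and open unprivileged low ports. A quick spot-check of the resulting drop-in (expected values shown as comments, assuming the default 02-crio.conf layout):

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.10.1"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",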
	I1217 20:57:18.205801  568189 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 20:57:18.213338  568189 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 20:57:18.220373  568189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:57:18.339982  568189 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 20:57:18.523093  568189 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 20:57:18.523169  568189 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 20:57:18.526796  568189 start.go:564] Will wait 60s for crictl version
	I1217 20:57:18.526868  568189 ssh_runner.go:195] Run: which crictl
	I1217 20:57:18.530299  568189 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 20:57:18.553630  568189 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 20:57:18.553755  568189 ssh_runner.go:195] Run: crio --version
	I1217 20:57:18.582651  568189 ssh_runner.go:195] Run: crio --version
	I1217 20:57:18.616862  568189 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	I1217 20:57:18.619814  568189 cli_runner.go:164] Run: docker network inspect ha-148567 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 20:57:18.635997  568189 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1217 20:57:18.639815  568189 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 20:57:18.649441  568189 kubeadm.go:884] updating cluster {Name:ha-148567 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:ha-148567 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 20:57:18.649590  568189 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 20:57:18.649659  568189 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 20:57:18.684542  568189 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 20:57:18.684566  568189 crio.go:433] Images already preloaded, skipping extraction
	I1217 20:57:18.684622  568189 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 20:57:18.710185  568189 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 20:57:18.710209  568189 cache_images.go:86] Images are preloaded, skipping loading
	I1217 20:57:18.710218  568189 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.3 crio true true} ...
	I1217 20:57:18.710314  568189 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-148567 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:ha-148567 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 20:57:18.710393  568189 ssh_runner.go:195] Run: crio config
	I1217 20:57:18.788945  568189 cni.go:84] Creating CNI manager for ""
	I1217 20:57:18.788969  568189 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1217 20:57:18.788980  568189 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 20:57:18.789006  568189 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-148567 NodeName:ha-148567 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 20:57:18.789146  568189 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-148567"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 20:57:18.789173  568189 kube-vip.go:115] generating kube-vip config ...
	I1217 20:57:18.789228  568189 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1217 20:57:18.801220  568189 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
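Without the ip_vs modules, kube-vip falls back to plain ARP-based failover instead of IPVS control-plane load-balancing. On hosts whose kernels ship the modules (this AWS kernel evidently does not), they could be loaded as below, illustratively:

    sudo modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh
    lsmod | grep ip_vs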
	I1217 20:57:18.801319  568189 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
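Because vip_leaderelection is enabled in the manifest above, the kube-vip instances on the three control-plane nodes coordinate through the plndr-cp-lock lease, and the current holder answers on the 192.168.49.254 VIP. Once the cluster is back up, this can be observed with, illustratively:

    kubectl -n kube-system get lease plndr-cp-lock
    kubectl -n kube-system get pods -o wide | grep kube-vip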
	I1217 20:57:18.801387  568189 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1217 20:57:18.809265  568189 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 20:57:18.809341  568189 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1217 20:57:18.816975  568189 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1217 20:57:18.830189  568189 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 20:57:18.843133  568189 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
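The kubeadm config dumped earlier is staged on the node as kubeadm.yaml.new; recent kubeadm releases can sanity-check such a file before it is consumed, for example:

    # illustrative; 'kubeadm config validate' exists in recent kubeadm releases
    sudo /var/lib/minikube/binaries/v1.34.3/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new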
	I1217 20:57:18.856384  568189 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1217 20:57:18.870226  568189 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1217 20:57:18.873999  568189 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 20:57:18.883854  568189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:57:18.997472  568189 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 20:57:19.014260  568189 certs.go:69] Setting up /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567 for IP: 192.168.49.2
	I1217 20:57:19.014282  568189 certs.go:195] generating shared ca certs ...
	I1217 20:57:19.014306  568189 certs.go:227] acquiring lock for ca certs: {Name:mk2b1bd9fa0b029b02dd38a7b1b08717d4426eca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:57:19.014456  568189 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.key
	I1217 20:57:19.014513  568189 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.key
	I1217 20:57:19.014526  568189 certs.go:257] generating profile certs ...
	I1217 20:57:19.014605  568189 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/client.key
	I1217 20:57:19.014640  568189 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/apiserver.key.235ee9c5
	I1217 20:57:19.014654  568189 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/apiserver.crt.235ee9c5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I1217 20:57:19.118946  568189 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/apiserver.crt.235ee9c5 ...
	I1217 20:57:19.118983  568189 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/apiserver.crt.235ee9c5: {Name:mk1086942903d0f4fe5882a203e756f5bb8d0e75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:57:19.119164  568189 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/apiserver.key.235ee9c5 ...
	I1217 20:57:19.119181  568189 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/apiserver.key.235ee9c5: {Name:mk80ca03d9af9f78d1f49f30dce3d5755dc5ecf4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:57:19.119259  568189 certs.go:382] copying /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/apiserver.crt.235ee9c5 -> /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/apiserver.crt
	I1217 20:57:19.119408  568189 certs.go:386] copying /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/apiserver.key.235ee9c5 -> /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/apiserver.key
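The SAN list a few lines up (10.96.0.1, 127.0.0.1, 10.0.0.1, the three control-plane IPs, and the 192.168.49.254 VIP) is what lets a single apiserver certificate serve every endpoint of the HA cluster. With OpenSSL 1.1.1 or newer it can be inspected directly:

    openssl x509 -noout -ext subjectAltName \
      -in /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/apiserver.crt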
	I1217 20:57:19.119551  568189 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/proxy-client.key
	I1217 20:57:19.119572  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1217 20:57:19.120309  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1217 20:57:19.120337  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1217 20:57:19.120353  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1217 20:57:19.120372  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1217 20:57:19.120396  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1217 20:57:19.120412  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1217 20:57:19.120422  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1217 20:57:19.120480  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem (1338 bytes)
	W1217 20:57:19.120520  568189 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412_empty.pem, impossibly tiny 0 bytes
	I1217 20:57:19.120532  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 20:57:19.120558  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem (1082 bytes)
	I1217 20:57:19.120587  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem (1123 bytes)
	I1217 20:57:19.120618  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem (1675 bytes)
	I1217 20:57:19.120667  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem (1708 bytes)
	I1217 20:57:19.120705  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:57:19.120722  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem -> /usr/share/ca-certificates/488412.pem
	I1217 20:57:19.120734  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem -> /usr/share/ca-certificates/4884122.pem
	I1217 20:57:19.121259  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 20:57:19.145342  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 20:57:19.172402  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 20:57:19.199952  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1217 20:57:19.221869  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1217 20:57:19.249229  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 20:57:19.272832  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 20:57:19.291834  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 20:57:19.311373  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 20:57:19.330971  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem --> /usr/share/ca-certificates/488412.pem (1338 bytes)
	I1217 20:57:19.351692  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem --> /usr/share/ca-certificates/4884122.pem (1708 bytes)
	I1217 20:57:19.371686  568189 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 20:57:19.386168  568189 ssh_runner.go:195] Run: openssl version
	I1217 20:57:19.392617  568189 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4884122.pem
	I1217 20:57:19.400115  568189 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4884122.pem /etc/ssl/certs/4884122.pem
	I1217 20:57:19.407811  568189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4884122.pem
	I1217 20:57:19.411925  568189 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 20:21 /usr/share/ca-certificates/4884122.pem
	I1217 20:57:19.411990  568189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4884122.pem
	I1217 20:57:19.458050  568189 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 20:57:19.465417  568189 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:57:19.472749  568189 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 20:57:19.480441  568189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:57:19.484121  568189 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:57:19.484184  568189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:57:19.525126  568189 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 20:57:19.532547  568189 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/488412.pem
	I1217 20:57:19.539760  568189 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/488412.pem /etc/ssl/certs/488412.pem
	I1217 20:57:19.547227  568189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/488412.pem
	I1217 20:57:19.551344  568189 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 20:21 /usr/share/ca-certificates/488412.pem
	I1217 20:57:19.551429  568189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/488412.pem
	I1217 20:57:19.592800  568189 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 20:57:19.600200  568189 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 20:57:19.604024  568189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 20:57:19.651875  568189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 20:57:19.709469  568189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 20:57:19.756552  568189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 20:57:19.821907  568189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 20:57:19.909301  568189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
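Each -checkend 86400 run above exits non-zero if the certificate expires within 86400 seconds (24 hours), which is how minikube decides whether an existing cert is still safe to reuse:

    openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400 \
      && echo "valid for at least 24h" || echo "expiring within 24h; regenerate"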
	I1217 20:57:19.984018  568189 kubeadm.go:401] StartCluster: {Name:ha-148567 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:ha-148567 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:57:19.984266  568189 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 20:57:19.984364  568189 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 20:57:20.031672  568189 cri.go:89] found id: "023b1c530d5224ef13b091e8f631aeb894024192e8f5534cf29c773714cf0197"
	I1217 20:57:20.031748  568189 cri.go:89] found id: "7b48eea7424a1e799bb5102aad672e4089e73d5c20382c2df99a7acabddf99d2"
	I1217 20:57:20.031769  568189 cri.go:89] found id: "055c04d40b9a0b3de2fc113e6e93106a29a67f711d7609c5bdc735d261688c9e"
	I1217 20:57:20.031790  568189 cri.go:89] found id: "4f2a8a504377b01cbe43d291e9fa7cd514647d2cf31a4b90042b71653d4272df"
	I1217 20:57:20.031827  568189 cri.go:89] found id: "0273f065d6acfc2f5b1353496b1c10bb1409bb5cd6154db0859cb71f3d44d9a6"
	I1217 20:57:20.031852  568189 cri.go:89] found id: ""
	I1217 20:57:20.031944  568189 ssh_runner.go:195] Run: sudo runc list -f json
	W1217 20:57:20.059831  568189 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T20:57:20Z" level=error msg="open /run/runc: no such file or directory"
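
The `sudo runc list -f json` probe looks for paused containers before the restart; on this node /run/runc does not exist yet, so minikube logs the failure as a warning and moves on. A sketch of that tolerant probe, with the string-matching fallback as an assumption for illustration:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // listRuncContainers runs `sudo runc list -f json` and treats a missing
    // /run/runc state directory as "no containers" rather than a hard error,
    // matching the downgraded warning in the log above.
    func listRuncContainers() (string, error) {
    	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
    	if err != nil {
    		if strings.Contains(string(out), "no such file or directory") {
    			return "", nil // runtime state dir absent: nothing is paused
    		}
    		return "", fmt.Errorf("runc list: %w: %s", err, out)
    	}
    	return string(out), nil
    }

    func main() {
    	containers, err := listRuncContainers()
    	if err != nil {
    		fmt.Println("unpause check failed:", err)
    		return
    	}
    	fmt.Println("paused containers:", containers)
    }
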
	I1217 20:57:20.059961  568189 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 20:57:20.073057  568189 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 20:57:20.073134  568189 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 20:57:20.073239  568189 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 20:57:20.082317  568189 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 20:57:20.082916  568189 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-148567" does not appear in /home/jenkins/minikube-integration/21808-485134/kubeconfig
	I1217 20:57:20.083118  568189 kubeconfig.go:62] /home/jenkins/minikube-integration/21808-485134/kubeconfig needs updating (will repair): [kubeconfig missing "ha-148567" cluster setting kubeconfig missing "ha-148567" context setting]
	I1217 20:57:20.083494  568189 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-485134/kubeconfig: {Name:mkc7214551fa855d6d4575d627c3ecd54291b351 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
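
The repair step re-adds the missing "ha-148567" cluster and context stanzas to the kubeconfig under a write lock. A minimal sketch of an equivalent repair with client-go's clientcmd package (server, CA, and client-cert paths taken from the surrounding log; the lock is omitted):

    package main

    import (
    	"log"

    	"k8s.io/client-go/tools/clientcmd"
    	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
    )

    func main() {
    	path := "/home/jenkins/minikube-integration/21808-485134/kubeconfig"
    	cfg, err := clientcmd.LoadFromFile(path)
    	if err != nil {
    		log.Fatal(err)
    	}
    	// Add the cluster, user, and context entries the verify step found missing.
    	cfg.Clusters["ha-148567"] = &clientcmdapi.Cluster{
    		Server:               "https://192.168.49.2:8443",
    		CertificateAuthority: "/home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt",
    	}
    	cfg.AuthInfos["ha-148567"] = &clientcmdapi.AuthInfo{
    		ClientCertificate: "/home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/client.crt",
    		ClientKey:         "/home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/client.key",
    	}
    	cfg.Contexts["ha-148567"] = &clientcmdapi.Context{
    		Cluster:  "ha-148567",
    		AuthInfo: "ha-148567",
    	}
    	if err := clientcmd.WriteToFile(*cfg, path); err != nil {
    		log.Fatal(err)
    	}
    }
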
	I1217 20:57:20.084509  568189 kapi.go:59] client config for ha-148567: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/client.crt", KeyFile:"/home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/client.key", CAFile:"/home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb51f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 20:57:20.085228  568189 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1217 20:57:20.085297  568189 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1217 20:57:20.085375  568189 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1217 20:57:20.085406  568189 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1217 20:57:20.085443  568189 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1217 20:57:20.085470  568189 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1217 20:57:20.085864  568189 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 20:57:20.094780  568189 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1217 20:57:20.094863  568189 kubeadm.go:602] duration metric: took 21.689252ms to restartPrimaryControlPlane
	I1217 20:57:20.094889  568189 kubeadm.go:403] duration metric: took 110.88ms to StartCluster
	I1217 20:57:20.094935  568189 settings.go:142] acquiring lock: {Name:mk91fac3a58d91b836cd0701183ef7cc1e571672 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:57:20.095035  568189 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21808-485134/kubeconfig
	I1217 20:57:20.095784  568189 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-485134/kubeconfig: {Name:mkc7214551fa855d6d4575d627c3ecd54291b351 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:57:20.096075  568189 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 20:57:20.096138  568189 start.go:242] waiting for startup goroutines ...
	I1217 20:57:20.096184  568189 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 20:57:20.097159  568189 config.go:182] Loaded profile config "ha-148567": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:57:20.102228  568189 out.go:179] * Enabled addons: 
	I1217 20:57:20.105517  568189 addons.go:530] duration metric: took 9.330527ms for enable addons: enabled=[]
	I1217 20:57:20.105608  568189 start.go:247] waiting for cluster config update ...
	I1217 20:57:20.105634  568189 start.go:256] writing updated cluster config ...
	I1217 20:57:20.109046  568189 out.go:203] 
	I1217 20:57:20.112434  568189 config.go:182] Loaded profile config "ha-148567": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:57:20.112620  568189 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/config.json ...
	I1217 20:57:20.116088  568189 out.go:179] * Starting "ha-148567-m02" control-plane node in "ha-148567" cluster
	I1217 20:57:20.119188  568189 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 20:57:20.122470  568189 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 20:57:20.125477  568189 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 20:57:20.125543  568189 cache.go:65] Caching tarball of preloaded images
	I1217 20:57:20.125698  568189 preload.go:238] Found /home/jenkins/minikube-integration/21808-485134/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1217 20:57:20.125733  568189 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1217 20:57:20.125911  568189 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/config.json ...
	I1217 20:57:20.126192  568189 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 20:57:20.156127  568189 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 20:57:20.156146  568189 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 20:57:20.156158  568189 cache.go:243] Successfully downloaded all kic artifacts
	I1217 20:57:20.156180  568189 start.go:360] acquireMachinesLock for ha-148567-m02: {Name:mka0efc876c4e4103c7b51199829a59495ed53d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 20:57:20.156236  568189 start.go:364] duration metric: took 37.022µs to acquireMachinesLock for "ha-148567-m02"
	I1217 20:57:20.156255  568189 start.go:96] Skipping create...Using existing machine configuration
	I1217 20:57:20.156260  568189 fix.go:54] fixHost starting: m02
	I1217 20:57:20.156516  568189 cli_runner.go:164] Run: docker container inspect ha-148567-m02 --format={{.State.Status}}
	I1217 20:57:20.185826  568189 fix.go:112] recreateIfNeeded on ha-148567-m02: state=Stopped err=<nil>
	W1217 20:57:20.185852  568189 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 20:57:20.189334  568189 out.go:252] * Restarting existing docker container for "ha-148567-m02" ...
	I1217 20:57:20.189427  568189 cli_runner.go:164] Run: docker start ha-148567-m02
	I1217 20:57:20.580145  568189 cli_runner.go:164] Run: docker container inspect ha-148567-m02 --format={{.State.Status}}
	I1217 20:57:20.605573  568189 kic.go:430] container "ha-148567-m02" state is running.
	I1217 20:57:20.605996  568189 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-148567-m02
	I1217 20:57:20.637469  568189 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/config.json ...
	I1217 20:57:20.637709  568189 machine.go:94] provisionDockerMachine start ...
	I1217 20:57:20.637776  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567-m02
	I1217 20:57:20.666081  568189 main.go:143] libmachine: Using SSH client type: native
	I1217 20:57:20.666435  568189 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33213 <nil> <nil>}
	I1217 20:57:20.666445  568189 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 20:57:20.667044  568189 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55606->127.0.0.1:33213: read: connection reset by peer
	I1217 20:57:23.835171  568189 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-148567-m02
	
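
The first dial at 20:57:20 fails with a connection reset because sshd inside the just-restarted container is not up yet; libmachine keeps retrying and succeeds about three seconds later. A hedged sketch of such a retry loop with golang.org/x/crypto/ssh (auth is omitted; the real runner loads the machine's id_rsa key):

    package main

    import (
    	"fmt"
    	"time"

    	"golang.org/x/crypto/ssh"
    )

    // dialWithRetry keeps re-dialing SSH while the freshly restarted
    // container's sshd comes up, which is why the handshake error above
    // resolves a few seconds later.
    func dialWithRetry(addr string, cfg *ssh.ClientConfig, timeout time.Duration) (*ssh.Client, error) {
    	deadline := time.Now().Add(timeout)
    	for {
    		client, err := ssh.Dial("tcp", addr, cfg)
    		if err == nil {
    			return client, nil
    		}
    		if time.Now().After(deadline) {
    			return nil, fmt.Errorf("ssh dial %s: %w", addr, err)
    		}
    		time.Sleep(time.Second)
    	}
    }

    func main() {
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test rig only
    		Timeout:         5 * time.Second,
    		// Auth omitted for brevity; the real runner uses the machine key.
    	}
    	_, err := dialWithRetry("127.0.0.1:33213", cfg, 30*time.Second)
    	fmt.Println(err)
    }
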
	I1217 20:57:23.835247  568189 ubuntu.go:182] provisioning hostname "ha-148567-m02"
	I1217 20:57:23.835352  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567-m02
	I1217 20:57:23.865477  568189 main.go:143] libmachine: Using SSH client type: native
	I1217 20:57:23.865786  568189 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33213 <nil> <nil>}
	I1217 20:57:23.865799  568189 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-148567-m02 && echo "ha-148567-m02" | sudo tee /etc/hostname
	I1217 20:57:24.081116  568189 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-148567-m02
	
	I1217 20:57:24.081197  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567-m02
	I1217 20:57:24.137190  568189 main.go:143] libmachine: Using SSH client type: native
	I1217 20:57:24.137506  568189 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33213 <nil> <nil>}
	I1217 20:57:24.137528  568189 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-148567-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-148567-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-148567-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 20:57:24.316986  568189 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 20:57:24.317016  568189 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21808-485134/.minikube CaCertPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21808-485134/.minikube}
	I1217 20:57:24.317033  568189 ubuntu.go:190] setting up certificates
	I1217 20:57:24.317049  568189 provision.go:84] configureAuth start
	I1217 20:57:24.317123  568189 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-148567-m02
	I1217 20:57:24.367712  568189 provision.go:143] copyHostCerts
	I1217 20:57:24.367760  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem
	I1217 20:57:24.367793  568189 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem, removing ...
	I1217 20:57:24.367807  568189 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem
	I1217 20:57:24.367891  568189 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem (1123 bytes)
	I1217 20:57:24.367990  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem
	I1217 20:57:24.368036  568189 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem, removing ...
	I1217 20:57:24.368044  568189 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem
	I1217 20:57:24.368085  568189 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem (1675 bytes)
	I1217 20:57:24.368162  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem
	I1217 20:57:24.368206  568189 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem, removing ...
	I1217 20:57:24.368214  568189 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem
	I1217 20:57:24.368237  568189 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem (1082 bytes)
	I1217 20:57:24.368289  568189 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem org=jenkins.ha-148567-m02 san=[127.0.0.1 192.168.49.3 ha-148567-m02 localhost minikube]
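
configureAuth regenerates the machine's server certificate, signed by the local CA with the SAN list shown above (127.0.0.1, 192.168.49.3, ha-148567-m02, localhost, minikube). A condensed sketch of SAN-bearing certificate issuance with crypto/x509; the CA loading and PEM writing around it are elided, and the helper name is illustrative:

    package certs

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"time"
    )

    // newServerCert issues a server certificate signed by caCert/caKey with
    // the given DNS and IP SANs, roughly the step provision.go:117 logs above.
    func newServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey,
    	dnsSANs []string, ipSANs []net.IP) ([]byte, *rsa.PrivateKey, error) {

    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return nil, nil, err
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(time.Now().UnixNano()),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-148567-m02"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     dnsSANs,    // e.g. ha-148567-m02, localhost, minikube
    		IPAddresses:  ipSANs,     // e.g. 127.0.0.1, 192.168.49.3
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
    	if err != nil {
    		return nil, nil, err
    	}
    	return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), key, nil
    }
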
	I1217 20:57:24.734586  568189 provision.go:177] copyRemoteCerts
	I1217 20:57:24.734657  568189 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 20:57:24.734700  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567-m02
	I1217 20:57:24.752816  568189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/ha-148567-m02/id_rsa Username:docker}
	I1217 20:57:24.861032  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1217 20:57:24.861096  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 20:57:24.885807  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1217 20:57:24.885871  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1217 20:57:24.909744  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1217 20:57:24.909802  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 20:57:24.940905  568189 provision.go:87] duration metric: took 623.841925ms to configureAuth
	I1217 20:57:24.940983  568189 ubuntu.go:206] setting minikube options for container-runtime
	I1217 20:57:24.941278  568189 config.go:182] Loaded profile config "ha-148567": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:57:24.941438  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567-m02
	I1217 20:57:24.973318  568189 main.go:143] libmachine: Using SSH client type: native
	I1217 20:57:24.973626  568189 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33213 <nil> <nil>}
	I1217 20:57:24.973640  568189 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 20:57:25.394552  568189 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 20:57:25.394616  568189 machine.go:97] duration metric: took 4.756897721s to provisionDockerMachine
	I1217 20:57:25.394644  568189 start.go:293] postStartSetup for "ha-148567-m02" (driver="docker")
	I1217 20:57:25.394675  568189 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 20:57:25.394774  568189 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 20:57:25.394857  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567-m02
	I1217 20:57:25.413005  568189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/ha-148567-m02/id_rsa Username:docker}
	I1217 20:57:25.507933  568189 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 20:57:25.511214  568189 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 20:57:25.511242  568189 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 20:57:25.511254  568189 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-485134/.minikube/addons for local assets ...
	I1217 20:57:25.511331  568189 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-485134/.minikube/files for local assets ...
	I1217 20:57:25.511429  568189 filesync.go:149] local asset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem -> 4884122.pem in /etc/ssl/certs
	I1217 20:57:25.511454  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem -> /etc/ssl/certs/4884122.pem
	I1217 20:57:25.511595  568189 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 20:57:25.519225  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem --> /etc/ssl/certs/4884122.pem (1708 bytes)
	I1217 20:57:25.536498  568189 start.go:296] duration metric: took 141.821713ms for postStartSetup
	I1217 20:57:25.536594  568189 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 20:57:25.536641  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567-m02
	I1217 20:57:25.554701  568189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/ha-148567-m02/id_rsa Username:docker}
	I1217 20:57:25.648875  568189 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 20:57:25.653908  568189 fix.go:56] duration metric: took 5.497641165s for fixHost
	I1217 20:57:25.653937  568189 start.go:83] releasing machines lock for "ha-148567-m02", held for 5.497692546s
	I1217 20:57:25.654030  568189 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-148567-m02
	I1217 20:57:25.674474  568189 out.go:179] * Found network options:
	I1217 20:57:25.677239  568189 out.go:179]   - NO_PROXY=192.168.49.2
	W1217 20:57:25.680103  568189 proxy.go:120] fail to check proxy env: Error ip not in block
	I1217 20:57:25.680211  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem (1338 bytes)
	W1217 20:57:25.680260  568189 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412_empty.pem, impossibly tiny 0 bytes
	I1217 20:57:25.680273  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 20:57:25.680302  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem (1082 bytes)
	I1217 20:57:25.680332  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem (1123 bytes)
	I1217 20:57:25.680360  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem (1675 bytes)
	I1217 20:57:25.680422  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem (1708 bytes)
	I1217 20:57:25.680464  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:57:25.680483  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem -> /usr/share/ca-certificates/488412.pem
	I1217 20:57:25.680501  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem -> /usr/share/ca-certificates/4884122.pem
	I1217 20:57:25.680526  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 20:57:25.680594  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567-m02
	I1217 20:57:25.699072  568189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/ha-148567-m02/id_rsa Username:docker}
	I1217 20:57:25.806704  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem --> /usr/share/ca-certificates/488412.pem (1338 bytes)
	I1217 20:57:25.825127  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem --> /usr/share/ca-certificates/4884122.pem (1708 bytes)
	I1217 20:57:25.843274  568189 ssh_runner.go:195] Run: openssl version
	I1217 20:57:25.850408  568189 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/488412.pem
	I1217 20:57:25.858349  568189 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/488412.pem /etc/ssl/certs/488412.pem
	I1217 20:57:25.866386  568189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/488412.pem
	I1217 20:57:25.870671  568189 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 20:21 /usr/share/ca-certificates/488412.pem
	I1217 20:57:25.870754  568189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/488412.pem
	I1217 20:57:25.912800  568189 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 20:57:25.920578  568189 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4884122.pem
	I1217 20:57:25.928156  568189 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4884122.pem /etc/ssl/certs/4884122.pem
	I1217 20:57:25.935802  568189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4884122.pem
	I1217 20:57:25.939813  568189 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 20:21 /usr/share/ca-certificates/4884122.pem
	I1217 20:57:25.939893  568189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4884122.pem
	I1217 20:57:25.984495  568189 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 20:57:25.993961  568189 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:57:26.008927  568189 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 20:57:26.019188  568189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:57:26.024558  568189 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:57:26.024680  568189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:57:26.082015  568189 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 20:57:26.099109  568189 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-certificates >/dev/null 2>&1 && sudo update-ca-certificates || true"
	I1217 20:57:26.105246  568189 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-trust >/dev/null 2>&1 && sudo update-ca-trust extract || true"
	W1217 20:57:26.113304  568189 proxy.go:120] fail to check proxy env: Error ip not in block
	I1217 20:57:26.113412  568189 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 20:57:26.113483  568189 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 20:57:26.349285  568189 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 20:57:26.356495  568189 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 20:57:26.356569  568189 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 20:57:26.369266  568189 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 20:57:26.369291  568189 start.go:496] detecting cgroup driver to use...
	I1217 20:57:26.369323  568189 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 20:57:26.369374  568189 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 20:57:26.391970  568189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 20:57:26.408218  568189 docker.go:218] disabling cri-docker service (if available) ...
	I1217 20:57:26.408282  568189 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 20:57:26.433162  568189 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 20:57:26.464579  568189 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 20:57:26.722421  568189 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 20:57:27.055410  568189 docker.go:234] disabling docker service ...
	I1217 20:57:27.055512  568189 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 20:57:27.105418  568189 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 20:57:27.136492  568189 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 20:57:27.498616  568189 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 20:57:27.849231  568189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 20:57:27.879943  568189 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 20:57:27.940040  568189 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 20:57:27.940159  568189 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:57:27.970284  568189 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1217 20:57:27.970406  568189 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:57:27.993313  568189 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:57:28.003134  568189 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:57:28.018148  568189 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 20:57:28.038773  568189 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:57:28.082030  568189 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:57:28.095803  568189 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
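
Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with a fragment roughly like the following. This is a reconstruction from the logged commands, not a capture from the node, and the TOML section headers are assumed from CRI-O's stock layout:

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
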
	I1217 20:57:28.112015  568189 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 20:57:28.129347  568189 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 20:57:28.139870  568189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:57:28.466945  568189 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 20:58:58.759793  568189 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.292810635s)
	I1217 20:58:58.759820  568189 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 20:58:58.759888  568189 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 20:58:58.764083  568189 start.go:564] Will wait 60s for crictl version
	I1217 20:58:58.764156  568189 ssh_runner.go:195] Run: which crictl
	I1217 20:58:58.767972  568189 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 20:58:58.795899  568189 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 20:58:58.796007  568189 ssh_runner.go:195] Run: crio --version
	I1217 20:58:58.827201  568189 ssh_runner.go:195] Run: crio --version
	I1217 20:58:58.863057  568189 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	I1217 20:58:58.865958  568189 out.go:179]   - env NO_PROXY=192.168.49.2
	I1217 20:58:58.868926  568189 cli_runner.go:164] Run: docker network inspect ha-148567 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 20:58:58.886910  568189 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1217 20:58:58.891980  568189 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 20:58:58.903686  568189 mustload.go:66] Loading cluster: ha-148567
	I1217 20:58:58.904009  568189 config.go:182] Loaded profile config "ha-148567": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:58:58.904332  568189 cli_runner.go:164] Run: docker container inspect ha-148567 --format={{.State.Status}}
	I1217 20:58:58.922016  568189 host.go:66] Checking if "ha-148567" exists ...
	I1217 20:58:58.922335  568189 certs.go:69] Setting up /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567 for IP: 192.168.49.3
	I1217 20:58:58.922347  568189 certs.go:195] generating shared ca certs ...
	I1217 20:58:58.922361  568189 certs.go:227] acquiring lock for ca certs: {Name:mk2b1bd9fa0b029b02dd38a7b1b08717d4426eca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:58:58.922470  568189 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.key
	I1217 20:58:58.922522  568189 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.key
	I1217 20:58:58.922529  568189 certs.go:257] generating profile certs ...
	I1217 20:58:58.922618  568189 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/client.key
	I1217 20:58:58.922687  568189 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/apiserver.key.1961a769
	I1217 20:58:58.922732  568189 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/proxy-client.key
	I1217 20:58:58.922741  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1217 20:58:58.922754  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1217 20:58:58.922765  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1217 20:58:58.922777  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1217 20:58:58.922787  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1217 20:58:58.922803  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1217 20:58:58.922815  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1217 20:58:58.922825  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1217 20:58:58.922873  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem (1338 bytes)
	W1217 20:58:58.922904  568189 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412_empty.pem, impossibly tiny 0 bytes
	I1217 20:58:58.922923  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 20:58:58.922955  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem (1082 bytes)
	I1217 20:58:58.922983  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem (1123 bytes)
	I1217 20:58:58.923010  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem (1675 bytes)
	I1217 20:58:58.923089  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem (1708 bytes)
	I1217 20:58:58.923123  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem -> /usr/share/ca-certificates/4884122.pem
	I1217 20:58:58.923147  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:58:58.923161  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem -> /usr/share/ca-certificates/488412.pem
	I1217 20:58:58.923214  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567
	I1217 20:58:58.940978  568189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33208 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/ha-148567/id_rsa Username:docker}
	I1217 20:58:59.031917  568189 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1217 20:58:59.036151  568189 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1217 20:58:59.044650  568189 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1217 20:58:59.048524  568189 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1217 20:58:59.056890  568189 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1217 20:58:59.061264  568189 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1217 20:58:59.070225  568189 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1217 20:58:59.074080  568189 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1217 20:58:59.082761  568189 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1217 20:58:59.086318  568189 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1217 20:58:59.094905  568189 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1217 20:58:59.098892  568189 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1217 20:58:59.107797  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 20:58:59.130640  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 20:58:59.150337  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 20:58:59.170619  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1217 20:58:59.190148  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1217 20:58:59.207919  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 20:58:59.226715  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 20:58:59.255397  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 20:58:59.275249  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem --> /usr/share/ca-certificates/4884122.pem (1708 bytes)
	I1217 20:58:59.296360  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 20:58:59.315496  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem --> /usr/share/ca-certificates/488412.pem (1338 bytes)
	I1217 20:58:59.335711  568189 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1217 20:58:59.351659  568189 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1217 20:58:59.365425  568189 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1217 20:58:59.379095  568189 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1217 20:58:59.403513  568189 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1217 20:58:59.417385  568189 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1217 20:58:59.430972  568189 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1217 20:58:59.445861  568189 ssh_runner.go:195] Run: openssl version
	I1217 20:58:59.452092  568189 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4884122.pem
	I1217 20:58:59.460052  568189 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4884122.pem /etc/ssl/certs/4884122.pem
	I1217 20:58:59.467896  568189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4884122.pem
	I1217 20:58:59.471905  568189 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 20:21 /usr/share/ca-certificates/4884122.pem
	I1217 20:58:59.472027  568189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4884122.pem
	I1217 20:58:59.513981  568189 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 20:58:59.521659  568189 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:58:59.529706  568189 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 20:58:59.537199  568189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:58:59.541310  568189 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:58:59.541399  568189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:58:59.585446  568189 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 20:58:59.592862  568189 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/488412.pem
	I1217 20:58:59.600234  568189 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/488412.pem /etc/ssl/certs/488412.pem
	I1217 20:58:59.608581  568189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/488412.pem
	I1217 20:58:59.612452  568189 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 20:21 /usr/share/ca-certificates/488412.pem
	I1217 20:58:59.612541  568189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/488412.pem
	I1217 20:58:59.653344  568189 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 20:58:59.661141  568189 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 20:58:59.665238  568189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 20:58:59.706455  568189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 20:58:59.747808  568189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 20:58:59.789584  568189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 20:58:59.830635  568189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 20:58:59.871901  568189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1217 20:58:59.913067  568189 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.3 crio true true} ...
	I1217 20:58:59.913211  568189 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-148567-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:ha-148567 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 20:58:59.913253  568189 kube-vip.go:115] generating kube-vip config ...
	I1217 20:58:59.913314  568189 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1217 20:58:59.926579  568189 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
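
Because `lsmod | grep ip_vs` finds no IPVS kernel modules, minikube skips kube-vip's IPVS-based control-plane load-balancing and falls back to the ARP-mode VIP seen in the manifest below (vip_arp=true on 192.168.49.254). A trivial sketch of the module probe:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // ipvsAvailable mirrors the `lsmod | grep ip_vs` probe above: a non-zero
    // exit means the module list has no ip_vs entry.
    func ipvsAvailable() bool {
    	return exec.Command("sh", "-c", "lsmod | grep -q ip_vs").Run() == nil
    }

    func main() {
    	fmt.Println("ipvs kernel modules available:", ipvsAvailable())
    }
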
	I1217 20:58:59.926690  568189 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1217 20:58:59.926836  568189 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1217 20:58:59.934802  568189 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 20:58:59.934923  568189 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1217 20:58:59.942778  568189 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1217 20:58:59.955655  568189 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 20:58:59.968160  568189 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1217 20:58:59.982401  568189 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1217 20:58:59.986001  568189 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 20:58:59.995859  568189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:59:00.404474  568189 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 20:59:00.421506  568189 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 20:59:00.421874  568189 config.go:182] Loaded profile config "ha-148567": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:59:00.427429  568189 out.go:179] * Verifying Kubernetes components...
	I1217 20:59:00.430438  568189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:59:00.576754  568189 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 20:59:00.591993  568189 kapi.go:59] client config for ha-148567: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/client.crt", KeyFile:"/home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/client.key", CAFile:"/home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb51f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1217 20:59:00.592071  568189 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1217 20:59:00.592328  568189 node_ready.go:35] waiting up to 6m0s for node "ha-148567-m02" to be "Ready" ...
	I1217 20:59:07.706331  568189 node_ready.go:49] node "ha-148567-m02" is "Ready"
	I1217 20:59:07.706358  568189 node_ready.go:38] duration metric: took 7.114006977s for node "ha-148567-m02" to be "Ready" ...
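
node_ready.go polls the node object until its Ready condition reports True, bounded by the 6m timeout. An equivalent wait with client-go, a sketch assuming the kubeconfig path from this run:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady polls the node until its Ready condition is True,
    // mirroring the 6m wait logged for "ha-148567-m02".
    func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
    	return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, timeout, true,
    		func(ctx context.Context) (bool, error) {
    			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    			if err != nil {
    				return false, nil // transient API errors: keep polling
    			}
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady {
    					return c.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21808-485134/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	fmt.Println(waitNodeReady(cs, "ha-148567-m02", 6*time.Minute))
    }
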
	I1217 20:59:07.706371  568189 api_server.go:52] waiting for apiserver process to appear ...
	I1217 20:59:07.706429  568189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:59:07.728020  568189 api_server.go:72] duration metric: took 7.306463101s to wait for apiserver process to appear ...
	I1217 20:59:07.728044  568189 api_server.go:88] waiting for apiserver healthz status ...
	I1217 20:59:07.728063  568189 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1217 20:59:07.763283  568189 api_server.go:279] https://192.168.49.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1217 20:59:07.763309  568189 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1217 20:59:08.228746  568189 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1217 20:59:08.252676  568189 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 20:59:08.252767  568189 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1217 20:59:08.728188  568189 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1217 20:59:08.754073  568189 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 20:59:08.754096  568189 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
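Each round trip above repeats on a fixed cadence: the timestamps step from 20:59:08.228 to 08.728 to 09.228, i.e. roughly every 500 ms, and the per-check breakdown is logged twice per attempt (once at info level, once as a warning). A stdlib-only sketch of such a wait loop (the interval matches the logged cadence; everything else, including the function name and timeout handling, is assumed rather than taken from minikube):

    // Hypothetical stand-in for the wait loop driving the checks above.
    package verify

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func waitHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	for deadline := time.Now().Add(timeout); time.Now().Before(deadline); time.Sleep(500 * time.Millisecond) {
    		resp, err := client.Get(url)
    		if err != nil {
    			continue // apiserver not reachable yet; keep polling
    		}
    		resp.Body.Close()
    		if resp.StatusCode == http.StatusOK {
    			return nil // every post-start hook reports ok
    		}
    	}
    	return fmt.Errorf("%s still unhealthy after %s", url, timeout)
    }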
	I1217 20:59:09.228723  568189 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1217 20:59:09.239736  568189 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 20:59:09.239818  568189 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1217 20:59:09.728191  568189 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1217 20:59:09.749211  568189 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 20:59:09.749236  568189 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1217 20:59:10.228802  568189 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1217 20:59:10.249826  568189 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 20:59:10.249920  568189 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1217 20:59:10.728177  568189 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1217 20:59:10.738376  568189 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 20:59:10.738457  568189 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1217 20:59:11.228738  568189 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1217 20:59:11.237435  568189 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 20:59:11.237473  568189 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1217 20:59:11.728920  568189 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1217 20:59:11.737210  568189 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 20:59:11.737234  568189 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1217 20:59:12.228685  568189 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1217 20:59:12.257584  568189 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 20:59:12.257614  568189 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1217 20:59:12.728966  568189 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1217 20:59:12.741760  568189 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 20:59:12.741792  568189 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1217 20:59:13.228213  568189 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1217 20:59:13.237780  568189 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 20:59:13.237819  568189 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1217 20:59:13.728124  568189 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1217 20:59:13.736267  568189 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 20:59:13.736302  568189 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1217 20:59:14.228758  568189 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1217 20:59:14.248460  568189 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 20:59:14.248488  568189 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1217 20:59:14.728720  568189 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1217 20:59:14.746850  568189 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 20:59:14.746929  568189 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1217 20:59:15.228174  568189 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1217 20:59:15.243044  568189 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 20:59:15.243106  568189 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1217 20:59:15.728743  568189 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1217 20:59:15.737606  568189 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 20:59:15.737688  568189 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1217 20:59:16.228734  568189 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1217 20:59:16.237829  568189 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 20:59:16.237870  568189 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1217 20:59:16.728194  568189 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1217 20:59:16.736702  568189 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 20:59:16.736730  568189 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1217 20:59:17.228177  568189 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1217 20:59:17.237306  568189 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1217 20:59:17.238535  568189 api_server.go:141] control plane version: v1.34.3
	I1217 20:59:17.238566  568189 api_server.go:131] duration metric: took 9.510515092s to wait for apiserver health ...
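
The repeated 500 responses above are successive ~500 ms polls of the apiserver's /healthz endpoint; each report shows every poststarthook passing except `start-service-ip-repair-controllers`, and the wait ends once that hook clears and /healthz returns 200. A minimal Go sketch of this polling pattern, assuming a self-signed apiserver certificate (names here are illustrative, not minikube's actual api_server.go code):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or timeout elapses.
// Sketch only: the apiserver presents a self-signed cert in this setup,
// so certificate verification is skipped purely for illustration.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
		Timeout: 2 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // body is just "ok"
			}
			// 500 with a [+]/[-] hook report: keep polling.
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.49.2:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
```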
	I1217 20:59:17.238576  568189 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 20:59:17.244973  568189 system_pods.go:59] 26 kube-system pods found
	I1217 20:59:17.245011  568189 system_pods.go:61] "coredns-66bc5c9577-l8xqv" [e3a7d90f-a4fd-4393-a20a-d49f8a8aa0db] Running
	I1217 20:59:17.245018  568189 system_pods.go:61] "coredns-66bc5c9577-wgcmx" [4fbbda83-a7d2-41c0-98ea-066d493cd483] Running
	I1217 20:59:17.245023  568189 system_pods.go:61] "etcd-ha-148567" [d54c6b05-3d02-4af3-a488-ffe69d55c8c3] Running
	I1217 20:59:17.245027  568189 system_pods.go:61] "etcd-ha-148567-m02" [574da258-f8b2-4230-93a1-028b2c5765bd] Running
	I1217 20:59:17.245031  568189 system_pods.go:61] "etcd-ha-148567-m03" [c1a60f08-76af-48b5-afe6-9389bc0fca8b] Running
	I1217 20:59:17.245034  568189 system_pods.go:61] "kindnet-4xxcs" [3950f031-7524-40a2-aa03-f50e754478ed] Running
	I1217 20:59:17.245038  568189 system_pods.go:61] "kindnet-88zsz" [22238c23-358a-49e1-82d6-ac88a03c654f] Running
	I1217 20:59:17.245042  568189 system_pods.go:61] "kindnet-gwspj" [f5b895d0-8eba-4923-9537-bd07bd57d3b5] Running
	I1217 20:59:17.245046  568189 system_pods.go:61] "kindnet-pv94f" [6135ada6-3e1f-4b1b-a2e2-014a1eb62772] Running
	I1217 20:59:17.245054  568189 system_pods.go:61] "kube-apiserver-ha-148567" [3b830c1a-94be-41e9-bc3a-725b97833055] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 20:59:17.245060  568189 system_pods.go:61] "kube-apiserver-ha-148567-m02" [0f0e1b1c-b98e-46cd-a683-ce654b6fdeb1] Running
	I1217 20:59:17.245070  568189 system_pods.go:61] "kube-apiserver-ha-148567-m03" [1be33e29-7193-43bb-aaf4-e48cbf50abf9] Running
	I1217 20:59:17.245078  568189 system_pods.go:61] "kube-controller-manager-ha-148567" [8baf857a-5cbe-4de6-9aba-7a244ab2fb08] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 20:59:17.245086  568189 system_pods.go:61] "kube-controller-manager-ha-148567-m02" [da8d38ea-86a0-4b97-9512-0541ac0d2d44] Running
	I1217 20:59:17.245090  568189 system_pods.go:61] "kube-controller-manager-ha-148567-m03" [360b1069-90d9-45be-a0c3-3c81c88154e0] Running
	I1217 20:59:17.245094  568189 system_pods.go:61] "kube-proxy-8nmpd" [52f8f6d6-8de7-4758-b35e-843a2a6ed562] Running
	I1217 20:59:17.245097  568189 system_pods.go:61] "kube-proxy-9n5cb" [775371d6-bc7d-40f6-8e0f-655f265828ba] Running
	I1217 20:59:17.245101  568189 system_pods.go:61] "kube-proxy-9rv8b" [7e160782-bab5-49b5-950c-377bde3bec7f] Running
	I1217 20:59:17.245109  568189 system_pods.go:61] "kube-proxy-cbk47" [149fd1d8-7762-49fd-81da-23047452dc4a] Running
	I1217 20:59:17.245113  568189 system_pods.go:61] "kube-scheduler-ha-148567" [a04a9bb2-7f73-4940-9952-049c4b406086] Running
	I1217 20:59:17.245124  568189 system_pods.go:61] "kube-scheduler-ha-148567-m02" [534b3ee9-d48c-450d-b20d-308e0a13a720] Running
	I1217 20:59:17.245128  568189 system_pods.go:61] "kube-scheduler-ha-148567-m03" [08f0c7d8-2adc-444e-a8db-5ad41340d101] Running
	I1217 20:59:17.245132  568189 system_pods.go:61] "kube-vip-ha-148567" [05c573d7-3cc9-4d90-8a3e-154ba3c7423a] Running
	I1217 20:59:17.245136  568189 system_pods.go:61] "kube-vip-ha-148567-m02" [4c4de02d-d57f-4a66-983e-12dbdb0f3521] Running
	I1217 20:59:17.245140  568189 system_pods.go:61] "kube-vip-ha-148567-m03" [37a516e3-0f3b-413f-be72-a445c095667f] Running
	I1217 20:59:17.245144  568189 system_pods.go:61] "storage-provisioner" [b613dd7f-cf83-4a6a-a6b8-c7b6282790ab] Running
	I1217 20:59:17.245153  568189 system_pods.go:74] duration metric: took 6.571369ms to wait for pod list to return data ...
	I1217 20:59:17.245166  568189 default_sa.go:34] waiting for default service account to be created ...
	I1217 20:59:17.248376  568189 default_sa.go:45] found service account: "default"
	I1217 20:59:17.248403  568189 default_sa.go:55] duration metric: took 3.23112ms for default service account to be created ...
	I1217 20:59:17.248414  568189 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 20:59:17.254388  568189 system_pods.go:86] 26 kube-system pods found
	I1217 20:59:17.254429  568189 system_pods.go:89] "coredns-66bc5c9577-l8xqv" [e3a7d90f-a4fd-4393-a20a-d49f8a8aa0db] Running
	I1217 20:59:17.254436  568189 system_pods.go:89] "coredns-66bc5c9577-wgcmx" [4fbbda83-a7d2-41c0-98ea-066d493cd483] Running
	I1217 20:59:17.254441  568189 system_pods.go:89] "etcd-ha-148567" [d54c6b05-3d02-4af3-a488-ffe69d55c8c3] Running
	I1217 20:59:17.254445  568189 system_pods.go:89] "etcd-ha-148567-m02" [574da258-f8b2-4230-93a1-028b2c5765bd] Running
	I1217 20:59:17.254450  568189 system_pods.go:89] "etcd-ha-148567-m03" [c1a60f08-76af-48b5-afe6-9389bc0fca8b] Running
	I1217 20:59:17.254454  568189 system_pods.go:89] "kindnet-4xxcs" [3950f031-7524-40a2-aa03-f50e754478ed] Running
	I1217 20:59:17.254458  568189 system_pods.go:89] "kindnet-88zsz" [22238c23-358a-49e1-82d6-ac88a03c654f] Running
	I1217 20:59:17.254464  568189 system_pods.go:89] "kindnet-gwspj" [f5b895d0-8eba-4923-9537-bd07bd57d3b5] Running
	I1217 20:59:17.254471  568189 system_pods.go:89] "kindnet-pv94f" [6135ada6-3e1f-4b1b-a2e2-014a1eb62772] Running
	I1217 20:59:17.254478  568189 system_pods.go:89] "kube-apiserver-ha-148567" [3b830c1a-94be-41e9-bc3a-725b97833055] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 20:59:17.254487  568189 system_pods.go:89] "kube-apiserver-ha-148567-m02" [0f0e1b1c-b98e-46cd-a683-ce654b6fdeb1] Running
	I1217 20:59:17.254493  568189 system_pods.go:89] "kube-apiserver-ha-148567-m03" [1be33e29-7193-43bb-aaf4-e48cbf50abf9] Running
	I1217 20:59:17.254506  568189 system_pods.go:89] "kube-controller-manager-ha-148567" [8baf857a-5cbe-4de6-9aba-7a244ab2fb08] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 20:59:17.254511  568189 system_pods.go:89] "kube-controller-manager-ha-148567-m02" [da8d38ea-86a0-4b97-9512-0541ac0d2d44] Running
	I1217 20:59:17.254523  568189 system_pods.go:89] "kube-controller-manager-ha-148567-m03" [360b1069-90d9-45be-a0c3-3c81c88154e0] Running
	I1217 20:59:17.254527  568189 system_pods.go:89] "kube-proxy-8nmpd" [52f8f6d6-8de7-4758-b35e-843a2a6ed562] Running
	I1217 20:59:17.254531  568189 system_pods.go:89] "kube-proxy-9n5cb" [775371d6-bc7d-40f6-8e0f-655f265828ba] Running
	I1217 20:59:17.254535  568189 system_pods.go:89] "kube-proxy-9rv8b" [7e160782-bab5-49b5-950c-377bde3bec7f] Running
	I1217 20:59:17.254539  568189 system_pods.go:89] "kube-proxy-cbk47" [149fd1d8-7762-49fd-81da-23047452dc4a] Running
	I1217 20:59:17.254544  568189 system_pods.go:89] "kube-scheduler-ha-148567" [a04a9bb2-7f73-4940-9952-049c4b406086] Running
	I1217 20:59:17.254548  568189 system_pods.go:89] "kube-scheduler-ha-148567-m02" [534b3ee9-d48c-450d-b20d-308e0a13a720] Running
	I1217 20:59:17.254554  568189 system_pods.go:89] "kube-scheduler-ha-148567-m03" [08f0c7d8-2adc-444e-a8db-5ad41340d101] Running
	I1217 20:59:17.254558  568189 system_pods.go:89] "kube-vip-ha-148567" [05c573d7-3cc9-4d90-8a3e-154ba3c7423a] Running
	I1217 20:59:17.254564  568189 system_pods.go:89] "kube-vip-ha-148567-m02" [4c4de02d-d57f-4a66-983e-12dbdb0f3521] Running
	I1217 20:59:17.254568  568189 system_pods.go:89] "kube-vip-ha-148567-m03" [37a516e3-0f3b-413f-be72-a445c095667f] Running
	I1217 20:59:17.254574  568189 system_pods.go:89] "storage-provisioner" [b613dd7f-cf83-4a6a-a6b8-c7b6282790ab] Running
	I1217 20:59:17.254581  568189 system_pods.go:126] duration metric: took 6.162224ms to wait for k8s-apps to be running ...
	I1217 20:59:17.254602  568189 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 20:59:17.254663  568189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 20:59:17.268613  568189 system_svc.go:56] duration metric: took 13.999372ms WaitForService to wait for kubelet
	I1217 20:59:17.268642  568189 kubeadm.go:587] duration metric: took 16.847089867s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 20:59:17.268661  568189 node_conditions.go:102] verifying NodePressure condition ...
	I1217 20:59:17.272882  568189 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1217 20:59:17.272914  568189 node_conditions.go:123] node cpu capacity is 2
	I1217 20:59:17.272927  568189 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1217 20:59:17.272933  568189 node_conditions.go:123] node cpu capacity is 2
	I1217 20:59:17.272955  568189 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1217 20:59:17.272965  568189 node_conditions.go:123] node cpu capacity is 2
	I1217 20:59:17.272970  568189 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1217 20:59:17.272974  568189 node_conditions.go:123] node cpu capacity is 2
	I1217 20:59:17.272990  568189 node_conditions.go:105] duration metric: took 4.323407ms to run NodePressure ...
	I1217 20:59:17.273004  568189 start.go:242] waiting for startup goroutines ...
	I1217 20:59:17.273044  568189 start.go:256] writing updated cluster config ...
	I1217 20:59:17.276641  568189 out.go:203] 
	I1217 20:59:17.279823  568189 config.go:182] Loaded profile config "ha-148567": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:59:17.279977  568189 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/config.json ...
	I1217 20:59:17.283346  568189 out.go:179] * Starting "ha-148567-m03" control-plane node in "ha-148567" cluster
	I1217 20:59:17.287005  568189 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 20:59:17.289900  568189 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 20:59:17.292694  568189 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 20:59:17.292719  568189 cache.go:65] Caching tarball of preloaded images
	I1217 20:59:17.292773  568189 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 20:59:17.292856  568189 preload.go:238] Found /home/jenkins/minikube-integration/21808-485134/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1217 20:59:17.292875  568189 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1217 20:59:17.293025  568189 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/config.json ...
	I1217 20:59:17.316772  568189 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 20:59:17.316795  568189 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 20:59:17.316808  568189 cache.go:243] Successfully downloaded all kic artifacts
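
Before restarting m03, the kic base image is resolved against the local Docker daemon so the pull can be skipped. A rough equivalent of that existence check, shelling out to the docker CLI (hypothetical helper, not minikube's actual image.go implementation):

```go
package main

import (
	"fmt"
	"os/exec"
)

// imageInDaemon reports whether ref already exists in the local Docker
// daemon; `docker image inspect` exits non-zero when the image is absent.
func imageInDaemon(ref string) bool {
	return exec.Command("docker", "image", "inspect", ref).Run() == nil
}

func main() {
	ref := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141"
	if imageInDaemon(ref) {
		fmt.Println("found in local docker daemon, skipping pull")
	} else {
		fmt.Println("not cached locally; would pull", ref)
	}
}
```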
	I1217 20:59:17.316834  568189 start.go:360] acquireMachinesLock for ha-148567-m03: {Name:mk79ac9edce64d0e8c2ded9c9074a2bd7d2b5d55 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 20:59:17.316888  568189 start.go:364] duration metric: took 38.95µs to acquireMachinesLock for "ha-148567-m03"
	I1217 20:59:17.316913  568189 start.go:96] Skipping create...Using existing machine configuration
	I1217 20:59:17.316918  568189 fix.go:54] fixHost starting: m03
	I1217 20:59:17.317283  568189 cli_runner.go:164] Run: docker container inspect ha-148567-m03 --format={{.State.Status}}
	I1217 20:59:17.334541  568189 fix.go:112] recreateIfNeeded on ha-148567-m03: state=Stopped err=<nil>
	W1217 20:59:17.334574  568189 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 20:59:17.337913  568189 out.go:252] * Restarting existing docker container for "ha-148567-m03" ...
	I1217 20:59:17.337998  568189 cli_runner.go:164] Run: docker start ha-148567-m03
	I1217 20:59:17.630601  568189 cli_runner.go:164] Run: docker container inspect ha-148567-m03 --format={{.State.Status}}
	I1217 20:59:17.661698  568189 kic.go:430] container "ha-148567-m03" state is running.
	I1217 20:59:17.662070  568189 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-148567-m03
	I1217 20:59:17.697058  568189 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/config.json ...
	I1217 20:59:17.697290  568189 machine.go:94] provisionDockerMachine start ...
	I1217 20:59:17.697346  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567-m03
	I1217 20:59:17.735501  568189 main.go:143] libmachine: Using SSH client type: native
	I1217 20:59:17.735872  568189 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33218 <nil> <nil>}
	I1217 20:59:17.735883  568189 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 20:59:17.736599  568189 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33978->127.0.0.1:33218: read: connection reset by peer
	I1217 20:59:20.923505  568189 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-148567-m03
	
	I1217 20:59:20.923622  568189 ubuntu.go:182] provisioning hostname "ha-148567-m03"
	I1217 20:59:20.923718  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567-m03
	I1217 20:59:20.957211  568189 main.go:143] libmachine: Using SSH client type: native
	I1217 20:59:20.957509  568189 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33218 <nil> <nil>}
	I1217 20:59:20.957520  568189 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-148567-m03 && echo "ha-148567-m03" | sudo tee /etc/hostname
	I1217 20:59:21.165423  568189 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-148567-m03
	
	I1217 20:59:21.165574  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567-m03
	I1217 20:59:21.192963  568189 main.go:143] libmachine: Using SSH client type: native
	I1217 20:59:21.193292  568189 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33218 <nil> <nil>}
	I1217 20:59:21.193313  568189 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-148567-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-148567-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-148567-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 20:59:21.368432  568189 main.go:143] libmachine: SSH cmd err, output: <nil>: 
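
The shell snippet above keeps /etc/hosts idempotent: if no line already ends with the new hostname, it rewrites the Debian-style 127.0.1.1 entry in place, or appends one if none exists. The same guard written in Go, purely as an illustrative sketch (minikube runs it as shell over SSH instead):

```go
package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

// ensureHostsEntry mirrors the shell snippet in the log: add or rewrite
// a 127.0.1.1 line for hostname, leaving all other entries untouched.
func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	// Already mapped? Then nothing to do (compare `grep -xq '.*\s<name>'`).
	if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(hostname) + `$`).Match(data) {
		return nil
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	entry := "127.0.1.1 " + hostname
	var out string
	if loopback.Match(data) {
		out = loopback.ReplaceAllString(string(data), entry)
	} else {
		out = strings.TrimRight(string(data), "\n") + "\n" + entry + "\n"
	}
	return os.WriteFile(path, []byte(out), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "ha-148567-m03"); err != nil {
		fmt.Println(err)
	}
}
```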
	I1217 20:59:21.368455  568189 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21808-485134/.minikube CaCertPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21808-485134/.minikube}
	I1217 20:59:21.368471  568189 ubuntu.go:190] setting up certificates
	I1217 20:59:21.368480  568189 provision.go:84] configureAuth start
	I1217 20:59:21.368545  568189 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-148567-m03
	I1217 20:59:21.396285  568189 provision.go:143] copyHostCerts
	I1217 20:59:21.396333  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem
	I1217 20:59:21.396368  568189 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem, removing ...
	I1217 20:59:21.396381  568189 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem
	I1217 20:59:21.396464  568189 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem (1082 bytes)
	I1217 20:59:21.396552  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem
	I1217 20:59:21.396575  568189 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem, removing ...
	I1217 20:59:21.396586  568189 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem
	I1217 20:59:21.396614  568189 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem (1123 bytes)
	I1217 20:59:21.396662  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem
	I1217 20:59:21.396683  568189 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem, removing ...
	I1217 20:59:21.396693  568189 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem
	I1217 20:59:21.396721  568189 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem (1675 bytes)
	I1217 20:59:21.396774  568189 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem org=jenkins.ha-148567-m03 san=[127.0.0.1 192.168.49.4 ha-148567-m03 localhost minikube]
	I1217 20:59:21.571429  568189 provision.go:177] copyRemoteCerts
	I1217 20:59:21.571550  568189 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 20:59:21.571647  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567-m03
	I1217 20:59:21.594363  568189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33218 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/ha-148567-m03/id_rsa Username:docker}
	I1217 20:59:21.708000  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1217 20:59:21.708057  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 20:59:21.741918  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1217 20:59:21.741984  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 20:59:21.772491  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1217 20:59:21.772556  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1217 20:59:21.816467  568189 provision.go:87] duration metric: took 447.972227ms to configureAuth
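
configureAuth regenerates the machine's server certificate with the SANs listed in the log (127.0.0.1, 192.168.49.4, ha-148567-m03, localhost, minikube). A condensed sketch of issuing such a certificate with Go's standard library; it is self-signed here for brevity, whereas minikube signs with its ca.pem/ca-key.pem pair:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-148567-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the san=[...] list in the log above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.4")},
		DNSNames:    []string{"ha-148567-m03", "localhost", "minikube"},
	}
	// Self-signed for the sketch; minikube passes its CA cert and key here.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```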
	I1217 20:59:21.816545  568189 ubuntu.go:206] setting minikube options for container-runtime
	I1217 20:59:21.816837  568189 config.go:182] Loaded profile config "ha-148567": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:59:21.816991  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567-m03
	I1217 20:59:21.842199  568189 main.go:143] libmachine: Using SSH client type: native
	I1217 20:59:21.842497  568189 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33218 <nil> <nil>}
	I1217 20:59:21.842510  568189 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 20:59:23.388796  568189 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 20:59:23.388873  568189 machine.go:97] duration metric: took 5.691572483s to provisionDockerMachine
	I1217 20:59:23.388901  568189 start.go:293] postStartSetup for "ha-148567-m03" (driver="docker")
	I1217 20:59:23.388945  568189 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 20:59:23.389048  568189 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 20:59:23.389125  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567-m03
	I1217 20:59:23.407539  568189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33218 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/ha-148567-m03/id_rsa Username:docker}
	I1217 20:59:23.504717  568189 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 20:59:23.508445  568189 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 20:59:23.508475  568189 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 20:59:23.508497  568189 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-485134/.minikube/addons for local assets ...
	I1217 20:59:23.508554  568189 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-485134/.minikube/files for local assets ...
	I1217 20:59:23.508641  568189 filesync.go:149] local asset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem -> 4884122.pem in /etc/ssl/certs
	I1217 20:59:23.508652  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem -> /etc/ssl/certs/4884122.pem
	I1217 20:59:23.508753  568189 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 20:59:23.516893  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem --> /etc/ssl/certs/4884122.pem (1708 bytes)
	I1217 20:59:23.537776  568189 start.go:296] duration metric: took 148.841829ms for postStartSetup
	I1217 20:59:23.537865  568189 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 20:59:23.537922  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567-m03
	I1217 20:59:23.556786  568189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33218 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/ha-148567-m03/id_rsa Username:docker}
	I1217 20:59:23.652766  568189 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 20:59:23.658117  568189 fix.go:56] duration metric: took 6.341191994s for fixHost
	I1217 20:59:23.658141  568189 start.go:83] releasing machines lock for "ha-148567-m03", held for 6.341239765s
	I1217 20:59:23.658236  568189 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-148567-m03
	I1217 20:59:23.679391  568189 out.go:179] * Found network options:
	I1217 20:59:23.682308  568189 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1217 20:59:23.685317  568189 proxy.go:120] fail to check proxy env: Error ip not in block
	W1217 20:59:23.685349  568189 proxy.go:120] fail to check proxy env: Error ip not in block
	I1217 20:59:23.685436  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem (1338 bytes)
	W1217 20:59:23.685484  568189 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412_empty.pem, impossibly tiny 0 bytes
	I1217 20:59:23.685498  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 20:59:23.685532  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem (1082 bytes)
	I1217 20:59:23.685564  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem (1123 bytes)
	I1217 20:59:23.685595  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem (1675 bytes)
	I1217 20:59:23.685643  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem (1708 bytes)
	I1217 20:59:23.685680  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem -> /usr/share/ca-certificates/488412.pem
	I1217 20:59:23.685700  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem -> /usr/share/ca-certificates/4884122.pem
	I1217 20:59:23.685712  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:59:23.685732  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem --> /usr/share/ca-certificates/488412.pem (1338 bytes)
	I1217 20:59:23.685785  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567-m03
	I1217 20:59:23.704133  568189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33218 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/ha-148567-m03/id_rsa Username:docker}
	I1217 20:59:23.825155  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem --> /usr/share/ca-certificates/4884122.pem (1708 bytes)
	I1217 20:59:23.849401  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 20:59:23.873252  568189 ssh_runner.go:195] Run: openssl version
	I1217 20:59:23.884717  568189 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/488412.pem
	I1217 20:59:23.894872  568189 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/488412.pem /etc/ssl/certs/488412.pem
	I1217 20:59:23.906983  568189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/488412.pem
	I1217 20:59:23.912255  568189 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 20:21 /usr/share/ca-certificates/488412.pem
	I1217 20:59:23.912326  568189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/488412.pem
	I1217 20:59:23.985078  568189 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 20:59:23.994724  568189 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4884122.pem
	I1217 20:59:24.026915  568189 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4884122.pem /etc/ssl/certs/4884122.pem
	I1217 20:59:24.068192  568189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4884122.pem
	I1217 20:59:24.080822  568189 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 20:21 /usr/share/ca-certificates/4884122.pem
	I1217 20:59:24.080947  568189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4884122.pem
	I1217 20:59:24.182542  568189 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 20:59:24.200285  568189 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:59:24.222177  568189 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 20:59:24.235700  568189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:59:24.244507  568189 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:59:24.244617  568189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:59:24.320887  568189 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 20:59:24.336685  568189 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-certificates >/dev/null 2>&1 && sudo update-ca-certificates || true"
	I1217 20:59:24.350359  568189 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-trust >/dev/null 2>&1 && sudo update-ca-trust extract || true"
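
The `openssl x509 -hash -noout` / `ln -fs ... /etc/ssl/certs/<hash>.0` pairs above install each CA under OpenSSL's subject-hash convention: TLS stacks look up an issuer by a file named `<subject-hash>.0`, which is why the symlinks land at names like 51391683.0, 3ec20f2e.0, and b5213941.0. A small sketch of that install step (hypothetical helper; assumes an `openssl` binary on PATH, as the log's SSH commands do):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA symlinks certPath into certsDir under OpenSSL's
// <subject-hash>.0 naming so TLS stacks can find it by issuer.
func installCA(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	os.Remove(link) // mimic `ln -fs`: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println(err)
	}
}
```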
	W1217 20:59:24.358402  568189 proxy.go:120] fail to check proxy env: Error ip not in block
	W1217 20:59:24.358481  568189 proxy.go:120] fail to check proxy env: Error ip not in block
	I1217 20:59:24.358586  568189 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 20:59:24.358716  568189 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 20:59:24.592070  568189 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 20:59:24.599441  568189 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 20:59:24.599517  568189 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 20:59:24.610713  568189 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 20:59:24.610738  568189 start.go:496] detecting cgroup driver to use...
	I1217 20:59:24.610768  568189 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 20:59:24.610821  568189 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 20:59:24.642252  568189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 20:59:24.667730  568189 docker.go:218] disabling cri-docker service (if available) ...
	I1217 20:59:24.667804  568189 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 20:59:24.701389  568189 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 20:59:24.736876  568189 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 20:59:25.009438  568189 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 20:59:25.297427  568189 docker.go:234] disabling docker service ...
	I1217 20:59:25.297496  568189 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 20:59:25.322653  568189 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 20:59:25.339124  568189 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 20:59:25.552070  568189 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 20:59:25.758562  568189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 20:59:25.777883  568189 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 20:59:25.800345  568189 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 20:59:25.800419  568189 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:59:25.816339  568189 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1217 20:59:25.816411  568189 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:59:25.826969  568189 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:59:25.836513  568189 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:59:25.846534  568189 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 20:59:25.856329  568189 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:59:25.866346  568189 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:59:25.875696  568189 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:59:25.885875  568189 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 20:59:25.894536  568189 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 20:59:25.903937  568189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:59:26.158009  568189 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 20:59:27.447640  568189 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.289596192s)
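
The run of `sed -i` commands above patches /etc/crio/crio.conf.d/02-crio.conf in place: pinning the pause image, switching `cgroup_manager` to cgroupfs, and re-adding `conmon_cgroup = "pod"`, before crio is restarted. The same edits expressed as one Go pass over the file, as an illustrative sketch only (minikube performs them as shell over SSH):

```go
package main

import (
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		panic(err)
	}
	s := string(data)
	// Pin the pause image (first sed in the log).
	s = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(s, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	// Drop any existing conmon_cgroup line, then re-add it right after
	// cgroup_manager, as the sed pair in the log does.
	s = regexp.MustCompile(`(?m)^conmon_cgroup = .*$\n?`).ReplaceAllString(s, "")
	s = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(s, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
	if err := os.WriteFile(conf, []byte(s), 0644); err != nil {
		panic(err)
	}
	// The default_sysctls edit and `systemctl restart crio` are omitted here.
}
```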
	I1217 20:59:27.447667  568189 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 20:59:27.447742  568189 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 20:59:27.451909  568189 start.go:564] Will wait 60s for crictl version
	I1217 20:59:27.452022  568189 ssh_runner.go:195] Run: which crictl
	I1217 20:59:27.455782  568189 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 20:59:27.480696  568189 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 20:59:27.480875  568189 ssh_runner.go:195] Run: crio --version
	I1217 20:59:27.511380  568189 ssh_runner.go:195] Run: crio --version
	I1217 20:59:27.545667  568189 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	I1217 20:59:27.548725  568189 out.go:179]   - env NO_PROXY=192.168.49.2
	I1217 20:59:27.551654  568189 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1217 20:59:27.554631  568189 cli_runner.go:164] Run: docker network inspect ha-148567 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 20:59:27.569507  568189 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1217 20:59:27.573575  568189 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 20:59:27.583348  568189 mustload.go:66] Loading cluster: ha-148567
	I1217 20:59:27.583685  568189 config.go:182] Loaded profile config "ha-148567": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:59:27.583957  568189 cli_runner.go:164] Run: docker container inspect ha-148567 --format={{.State.Status}}
	I1217 20:59:27.602103  568189 host.go:66] Checking if "ha-148567" exists ...
	I1217 20:59:27.603047  568189 certs.go:69] Setting up /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567 for IP: 192.168.49.4
	I1217 20:59:27.603066  568189 certs.go:195] generating shared ca certs ...
	I1217 20:59:27.603090  568189 certs.go:227] acquiring lock for ca certs: {Name:mk2b1bd9fa0b029b02dd38a7b1b08717d4426eca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:59:27.603216  568189 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.key
	I1217 20:59:27.603263  568189 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.key
	I1217 20:59:27.603274  568189 certs.go:257] generating profile certs ...
	I1217 20:59:27.603376  568189 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/client.key
	I1217 20:59:27.603463  568189 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/apiserver.key.3b1ba341
	I1217 20:59:27.603515  568189 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/proxy-client.key
	I1217 20:59:27.603530  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1217 20:59:27.603543  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1217 20:59:27.603558  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1217 20:59:27.603572  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1217 20:59:27.603621  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1217 20:59:27.603634  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1217 20:59:27.603645  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1217 20:59:27.603655  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1217 20:59:27.603709  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem (1338 bytes)
	W1217 20:59:27.603744  568189 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412_empty.pem, impossibly tiny 0 bytes
	I1217 20:59:27.603756  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 20:59:27.603782  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem (1082 bytes)
	I1217 20:59:27.603813  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem (1123 bytes)
	I1217 20:59:27.603839  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem (1675 bytes)
	I1217 20:59:27.603886  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem (1708 bytes)
	I1217 20:59:27.603922  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:59:27.603937  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem -> /usr/share/ca-certificates/488412.pem
	I1217 20:59:27.603948  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem -> /usr/share/ca-certificates/4884122.pem
	I1217 20:59:27.604007  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567
	I1217 20:59:27.622811  568189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33208 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/ha-148567/id_rsa Username:docker}
	I1217 20:59:27.711932  568189 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1217 20:59:27.715648  568189 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1217 20:59:27.723761  568189 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1217 20:59:27.727209  568189 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1217 20:59:27.735381  568189 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1217 20:59:27.738998  568189 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1217 20:59:27.747188  568189 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1217 20:59:27.750785  568189 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1217 20:59:27.758913  568189 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1217 20:59:27.762427  568189 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1217 20:59:27.770856  568189 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1217 20:59:27.774347  568189 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1217 20:59:27.782918  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 20:59:27.807233  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 20:59:27.825936  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 20:59:27.843705  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1217 20:59:27.863259  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1217 20:59:27.883764  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 20:59:27.904255  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 20:59:27.951575  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 20:59:27.979511  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 20:59:28.010041  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem --> /usr/share/ca-certificates/488412.pem (1338 bytes)
	I1217 20:59:28.032795  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem --> /usr/share/ca-certificates/4884122.pem (1708 bytes)
	I1217 20:59:28.058120  568189 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1217 20:59:28.072480  568189 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1217 20:59:28.096660  568189 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1217 20:59:28.111050  568189 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1217 20:59:28.125599  568189 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1217 20:59:28.139988  568189 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1217 20:59:28.154668  568189 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1217 20:59:28.168340  568189 ssh_runner.go:195] Run: openssl version
	I1217 20:59:28.174792  568189 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:59:28.182440  568189 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 20:59:28.191221  568189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:59:28.195516  568189 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:59:28.195766  568189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:59:28.244735  568189 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 20:59:28.252179  568189 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/488412.pem
	I1217 20:59:28.259686  568189 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/488412.pem /etc/ssl/certs/488412.pem
	I1217 20:59:28.270202  568189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/488412.pem
	I1217 20:59:28.274707  568189 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 20:21 /usr/share/ca-certificates/488412.pem
	I1217 20:59:28.274826  568189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/488412.pem
	I1217 20:59:28.316566  568189 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 20:59:28.324532  568189 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4884122.pem
	I1217 20:59:28.331852  568189 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4884122.pem /etc/ssl/certs/4884122.pem
	I1217 20:59:28.344147  568189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4884122.pem
	I1217 20:59:28.349920  568189 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 20:21 /usr/share/ca-certificates/4884122.pem
	I1217 20:59:28.350026  568189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4884122.pem
	I1217 20:59:28.397463  568189 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 20:59:28.405538  568189 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 20:59:28.409482  568189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 20:59:28.452939  568189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 20:59:28.494338  568189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 20:59:28.540466  568189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 20:59:28.582836  568189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 20:59:28.624131  568189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
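Note: the openssl/ln sequence above follows OpenSSL's hashed-directory lookup convention, and the -checkend 86400 calls verify that each control-plane certificate is still valid 24 hours out. A minimal sketch of the same pattern, assuming a certificate at /usr/share/ca-certificates/example.pem (hypothetical path):

    # compute the subject-hash name OpenSSL uses to look up CA certs
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/example.pem)
    # install the cert under /etc/ssl/certs as <hash>.0 so OpenSSL can find it
    sudo ln -fs /usr/share/ca-certificates/example.pem "/etc/ssl/certs/${HASH}.0"
    # exits 0 only if the cert does not expire within the next 86400s (24h)
    openssl x509 -noout -in /usr/share/ca-certificates/example.pem -checkend 86400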
	I1217 20:59:28.667766  568189 kubeadm.go:935] updating node {m03 192.168.49.4 8443 v1.34.3 crio true true} ...
	I1217 20:59:28.667874  568189 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-148567-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:ha-148567 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
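Note: the empty ExecStart= line in the kubelet drop-in above is the standard systemd idiom for clearing the base unit's command before substituting a new one. To inspect the merged result on the node (assuming the drop-in is installed at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, as logged further down):

    systemctl cat kubelet                # base unit plus drop-ins, in merge order
    systemctl show kubelet -p ExecStart  # effective command line after daemon-reload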
	I1217 20:59:28.667909  568189 kube-vip.go:115] generating kube-vip config ...
	I1217 20:59:28.667967  568189 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1217 20:59:28.681456  568189 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appear not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
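Note: kube-vip's control-plane load-balancing mode depends on the kernel's ipvs modules, and the probe is simply lsmod | grep ip_vs; since it matched nothing here, minikube falls back to an ARP-advertised VIP without IPVS load-balancing. On a kernel that ships the modules, one could load and re-check them roughly like this (a sketch, not part of the test run):

    sudo modprobe ip_vs      # core IPVS module
    sudo modprobe ip_vs_rr   # a scheduler module, e.g. round-robin
    lsmod | grep ip_vs       # should now list the loaded ipvs modules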
	I1217 20:59:28.681523  568189 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
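Note: the manifest above runs kube-vip as a static pod with hostNetwork and NET_ADMIN/NET_RAW capabilities, using leader election (vip_leaderelection, lease name plndr-cp-lock) so that exactly one control-plane node advertises the VIP 192.168.49.254 on eth0 via ARP. A sketch of checking which node currently holds the VIP, assuming working kubectl access to the cluster:

    # the elected holder is recorded in a kube-system Lease object
    kubectl -n kube-system get lease plndr-cp-lock -o jsonpath='{.spec.holderIdentity}'
    # on the winning node, the VIP appears as a secondary address on eth0
    ip addr show eth0 | grep 192.168.49.254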
	I1217 20:59:28.681593  568189 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1217 20:59:28.689896  568189 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 20:59:28.689971  568189 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1217 20:59:28.697831  568189 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1217 20:59:28.713126  568189 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 20:59:28.729184  568189 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1217 20:59:28.745530  568189 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1217 20:59:28.749870  568189 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
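Note: the one-liner above updates /etc/hosts without editing it in place: it filters out any stale control-plane.minikube.internal entry, appends the current VIP, writes to a PID-suffixed temp file, and copies the result back with cp rather than mv, which matters inside a Docker container where /etc/hosts is a bind mount and must keep its inode. Expanded for readability:

    {
      grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts   # drop any stale entry
      echo "192.168.49.254	control-plane.minikube.internal"      # append the current VIP
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts   # cp preserves the bind-mounted file's inode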
	I1217 20:59:28.762032  568189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:59:28.899317  568189 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 20:59:28.916505  568189 start.go:236] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 20:59:28.916882  568189 config.go:182] Loaded profile config "ha-148567": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:59:28.921876  568189 out.go:179] * Verifying Kubernetes components...
	I1217 20:59:28.924845  568189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:59:29.067107  568189 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 20:59:29.082388  568189 kapi.go:59] client config for ha-148567: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/client.crt", KeyFile:"/home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/client.key", CAFile:"/home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb51f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1217 20:59:29.082463  568189 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1217 20:59:29.082744  568189 node_ready.go:35] waiting up to 6m0s for node "ha-148567-m03" to be "Ready" ...
	I1217 20:59:29.086184  568189 node_ready.go:49] node "ha-148567-m03" is "Ready"
	I1217 20:59:29.086213  568189 node_ready.go:38] duration metric: took 3.444045ms for node "ha-148567-m03" to be "Ready" ...
	I1217 20:59:29.086226  568189 api_server.go:52] waiting for apiserver process to appear ...
	I1217 20:59:29.086308  568189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:59:29.587146  568189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:59:30.086424  568189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:59:30.587043  568189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:59:31.087307  568189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:59:31.587125  568189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:59:32.087199  568189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:59:32.586440  568189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:59:33.087014  568189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:59:33.587262  568189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:59:34.086776  568189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:59:34.586785  568189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:59:35.086598  568189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:59:35.587225  568189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:59:36.087060  568189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:59:36.587238  568189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:59:37.087356  568189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:59:37.586962  568189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:59:38.086425  568189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:59:38.587186  568189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:59:39.086440  568189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:59:39.587206  568189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:59:40.087337  568189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:59:40.586682  568189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:59:41.086960  568189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:59:41.587321  568189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:59:42.087299  568189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:59:42.587074  568189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:59:43.086416  568189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:59:43.100960  568189 api_server.go:72] duration metric: took 14.18440701s to wait for apiserver process to appear ...
	I1217 20:59:43.100982  568189 api_server.go:88] waiting for apiserver healthz status ...
	I1217 20:59:43.101000  568189 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1217 20:59:43.111943  568189 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1217 20:59:43.113605  568189 api_server.go:141] control plane version: v1.34.3
	I1217 20:59:43.113627  568189 api_server.go:131] duration metric: took 12.639438ms to wait for apiserver health ...
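Note: apiserver readiness is gated in two stages above: poll roughly every 500ms for a kube-apiserver process with pgrep -xnf, then hit /healthz over TLS until it answers 200/ok. A minimal equivalent probe, assuming the endpoint and CA path from this run:

    # stage 1: wait for the apiserver process to appear
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do sleep 0.5; done
    # stage 2: wait for a healthy /healthz over TLS
    until curl -s --cacert /var/lib/minikube/certs/ca.crt \
          https://192.168.49.2:8443/healthz | grep -qx ok; do sleep 0.5; done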
	I1217 20:59:43.113635  568189 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 20:59:43.122498  568189 system_pods.go:59] 26 kube-system pods found
	I1217 20:59:43.122587  568189 system_pods.go:61] "coredns-66bc5c9577-l8xqv" [e3a7d90f-a4fd-4393-a20a-d49f8a8aa0db] Running
	I1217 20:59:43.122609  568189 system_pods.go:61] "coredns-66bc5c9577-wgcmx" [4fbbda83-a7d2-41c0-98ea-066d493cd483] Running
	I1217 20:59:43.122628  568189 system_pods.go:61] "etcd-ha-148567" [d54c6b05-3d02-4af3-a488-ffe69d55c8c3] Running
	I1217 20:59:43.122660  568189 system_pods.go:61] "etcd-ha-148567-m02" [574da258-f8b2-4230-93a1-028b2c5765bd] Running
	I1217 20:59:43.122680  568189 system_pods.go:61] "etcd-ha-148567-m03" [c1a60f08-76af-48b5-afe6-9389bc0fca8b] Running
	I1217 20:59:43.122700  568189 system_pods.go:61] "kindnet-4xxcs" [3950f031-7524-40a2-aa03-f50e754478ed] Running
	I1217 20:59:43.122719  568189 system_pods.go:61] "kindnet-88zsz" [22238c23-358a-49e1-82d6-ac88a03c654f] Running
	I1217 20:59:43.122747  568189 system_pods.go:61] "kindnet-gwspj" [f5b895d0-8eba-4923-9537-bd07bd57d3b5] Running
	I1217 20:59:43.122769  568189 system_pods.go:61] "kindnet-pv94f" [6135ada6-3e1f-4b1b-a2e2-014a1eb62772] Running
	I1217 20:59:43.122787  568189 system_pods.go:61] "kube-apiserver-ha-148567" [3b830c1a-94be-41e9-bc3a-725b97833055] Running
	I1217 20:59:43.122807  568189 system_pods.go:61] "kube-apiserver-ha-148567-m02" [0f0e1b1c-b98e-46cd-a683-ce654b6fdeb1] Running
	I1217 20:59:43.122827  568189 system_pods.go:61] "kube-apiserver-ha-148567-m03" [1be33e29-7193-43bb-aaf4-e48cbf50abf9] Running
	I1217 20:59:43.122857  568189 system_pods.go:61] "kube-controller-manager-ha-148567" [8baf857a-5cbe-4de6-9aba-7a244ab2fb08] Running
	I1217 20:59:43.122886  568189 system_pods.go:61] "kube-controller-manager-ha-148567-m02" [da8d38ea-86a0-4b97-9512-0541ac0d2d44] Running
	I1217 20:59:43.122906  568189 system_pods.go:61] "kube-controller-manager-ha-148567-m03" [360b1069-90d9-45be-a0c3-3c81c88154e0] Running
	I1217 20:59:43.122929  568189 system_pods.go:61] "kube-proxy-8nmpd" [52f8f6d6-8de7-4758-b35e-843a2a6ed562] Running
	I1217 20:59:43.122960  568189 system_pods.go:61] "kube-proxy-9n5cb" [775371d6-bc7d-40f6-8e0f-655f265828ba] Running
	I1217 20:59:43.122982  568189 system_pods.go:61] "kube-proxy-9rv8b" [7e160782-bab5-49b5-950c-377bde3bec7f] Running
	I1217 20:59:43.123002  568189 system_pods.go:61] "kube-proxy-cbk47" [149fd1d8-7762-49fd-81da-23047452dc4a] Running
	I1217 20:59:43.123021  568189 system_pods.go:61] "kube-scheduler-ha-148567" [a04a9bb2-7f73-4940-9952-049c4b406086] Running
	I1217 20:59:43.123040  568189 system_pods.go:61] "kube-scheduler-ha-148567-m02" [534b3ee9-d48c-450d-b20d-308e0a13a720] Running
	I1217 20:59:43.123071  568189 system_pods.go:61] "kube-scheduler-ha-148567-m03" [08f0c7d8-2adc-444e-a8db-5ad41340d101] Running
	I1217 20:59:43.123099  568189 system_pods.go:61] "kube-vip-ha-148567" [05c573d7-3cc9-4d90-8a3e-154ba3c7423a] Running
	I1217 20:59:43.123129  568189 system_pods.go:61] "kube-vip-ha-148567-m02" [4c4de02d-d57f-4a66-983e-12dbdb0f3521] Running
	I1217 20:59:43.123149  568189 system_pods.go:61] "kube-vip-ha-148567-m03" [37a516e3-0f3b-413f-be72-a445c095667f] Running
	I1217 20:59:43.123176  568189 system_pods.go:61] "storage-provisioner" [b613dd7f-cf83-4a6a-a6b8-c7b6282790ab] Running
	I1217 20:59:43.123204  568189 system_pods.go:74] duration metric: took 9.561362ms to wait for pod list to return data ...
	I1217 20:59:43.123228  568189 default_sa.go:34] waiting for default service account to be created ...
	I1217 20:59:43.126857  568189 default_sa.go:45] found service account: "default"
	I1217 20:59:43.126922  568189 default_sa.go:55] duration metric: took 3.673226ms for default service account to be created ...
	I1217 20:59:43.126952  568189 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 20:59:43.134811  568189 system_pods.go:86] 26 kube-system pods found
	I1217 20:59:43.134893  568189 system_pods.go:89] "coredns-66bc5c9577-l8xqv" [e3a7d90f-a4fd-4393-a20a-d49f8a8aa0db] Running
	I1217 20:59:43.134915  568189 system_pods.go:89] "coredns-66bc5c9577-wgcmx" [4fbbda83-a7d2-41c0-98ea-066d493cd483] Running
	I1217 20:59:43.134937  568189 system_pods.go:89] "etcd-ha-148567" [d54c6b05-3d02-4af3-a488-ffe69d55c8c3] Running
	I1217 20:59:43.134966  568189 system_pods.go:89] "etcd-ha-148567-m02" [574da258-f8b2-4230-93a1-028b2c5765bd] Running
	I1217 20:59:43.134990  568189 system_pods.go:89] "etcd-ha-148567-m03" [c1a60f08-76af-48b5-afe6-9389bc0fca8b] Running
	I1217 20:59:43.135010  568189 system_pods.go:89] "kindnet-4xxcs" [3950f031-7524-40a2-aa03-f50e754478ed] Running
	I1217 20:59:43.135031  568189 system_pods.go:89] "kindnet-88zsz" [22238c23-358a-49e1-82d6-ac88a03c654f] Running
	I1217 20:59:43.135052  568189 system_pods.go:89] "kindnet-gwspj" [f5b895d0-8eba-4923-9537-bd07bd57d3b5] Running
	I1217 20:59:43.135081  568189 system_pods.go:89] "kindnet-pv94f" [6135ada6-3e1f-4b1b-a2e2-014a1eb62772] Running
	I1217 20:59:43.135118  568189 system_pods.go:89] "kube-apiserver-ha-148567" [3b830c1a-94be-41e9-bc3a-725b97833055] Running
	I1217 20:59:43.135138  568189 system_pods.go:89] "kube-apiserver-ha-148567-m02" [0f0e1b1c-b98e-46cd-a683-ce654b6fdeb1] Running
	I1217 20:59:43.135160  568189 system_pods.go:89] "kube-apiserver-ha-148567-m03" [1be33e29-7193-43bb-aaf4-e48cbf50abf9] Running
	I1217 20:59:43.135194  568189 system_pods.go:89] "kube-controller-manager-ha-148567" [8baf857a-5cbe-4de6-9aba-7a244ab2fb08] Running
	I1217 20:59:43.135222  568189 system_pods.go:89] "kube-controller-manager-ha-148567-m02" [da8d38ea-86a0-4b97-9512-0541ac0d2d44] Running
	I1217 20:59:43.135243  568189 system_pods.go:89] "kube-controller-manager-ha-148567-m03" [360b1069-90d9-45be-a0c3-3c81c88154e0] Running
	I1217 20:59:43.135263  568189 system_pods.go:89] "kube-proxy-8nmpd" [52f8f6d6-8de7-4758-b35e-843a2a6ed562] Running
	I1217 20:59:43.135283  568189 system_pods.go:89] "kube-proxy-9n5cb" [775371d6-bc7d-40f6-8e0f-655f265828ba] Running
	I1217 20:59:43.135311  568189 system_pods.go:89] "kube-proxy-9rv8b" [7e160782-bab5-49b5-950c-377bde3bec7f] Running
	I1217 20:59:43.135338  568189 system_pods.go:89] "kube-proxy-cbk47" [149fd1d8-7762-49fd-81da-23047452dc4a] Running
	I1217 20:59:43.135357  568189 system_pods.go:89] "kube-scheduler-ha-148567" [a04a9bb2-7f73-4940-9952-049c4b406086] Running
	I1217 20:59:43.135375  568189 system_pods.go:89] "kube-scheduler-ha-148567-m02" [534b3ee9-d48c-450d-b20d-308e0a13a720] Running
	I1217 20:59:43.135394  568189 system_pods.go:89] "kube-scheduler-ha-148567-m03" [08f0c7d8-2adc-444e-a8db-5ad41340d101] Running
	I1217 20:59:43.135423  568189 system_pods.go:89] "kube-vip-ha-148567" [05c573d7-3cc9-4d90-8a3e-154ba3c7423a] Running
	I1217 20:59:43.135455  568189 system_pods.go:89] "kube-vip-ha-148567-m02" [4c4de02d-d57f-4a66-983e-12dbdb0f3521] Running
	I1217 20:59:43.135477  568189 system_pods.go:89] "kube-vip-ha-148567-m03" [37a516e3-0f3b-413f-be72-a445c095667f] Running
	I1217 20:59:43.135495  568189 system_pods.go:89] "storage-provisioner" [b613dd7f-cf83-4a6a-a6b8-c7b6282790ab] Running
	I1217 20:59:43.135529  568189 system_pods.go:126] duration metric: took 8.54658ms to wait for k8s-apps to be running ...
	I1217 20:59:43.135556  568189 system_svc.go:44] waiting for kubelet service to be running ...
	I1217 20:59:43.135647  568189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 20:59:43.150029  568189 system_svc.go:56] duration metric: took 14.465953ms WaitForService to wait for kubelet
	I1217 20:59:43.150071  568189 kubeadm.go:587] duration metric: took 14.233522691s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 20:59:43.150090  568189 node_conditions.go:102] verifying NodePressure condition ...
	I1217 20:59:43.154561  568189 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1217 20:59:43.154592  568189 node_conditions.go:123] node cpu capacity is 2
	I1217 20:59:43.154613  568189 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1217 20:59:43.154619  568189 node_conditions.go:123] node cpu capacity is 2
	I1217 20:59:43.154624  568189 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1217 20:59:43.154628  568189 node_conditions.go:123] node cpu capacity is 2
	I1217 20:59:43.154641  568189 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1217 20:59:43.154646  568189 node_conditions.go:123] node cpu capacity is 2
	I1217 20:59:43.154651  568189 node_conditions.go:105] duration metric: took 4.555345ms to run NodePressure ...
	I1217 20:59:43.154681  568189 start.go:242] waiting for startup goroutines ...
	I1217 20:59:43.154709  568189 start.go:256] writing updated cluster config ...
	I1217 20:59:43.158527  568189 out.go:203] 
	I1217 20:59:43.161746  568189 config.go:182] Loaded profile config "ha-148567": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:59:43.161871  568189 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/config.json ...
	I1217 20:59:43.165329  568189 out.go:179] * Starting "ha-148567-m04" worker node in "ha-148567" cluster
	I1217 20:59:43.168355  568189 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 20:59:43.171262  568189 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 20:59:43.174132  568189 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 20:59:43.174410  568189 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 20:59:43.174454  568189 cache.go:65] Caching tarball of preloaded images
	I1217 20:59:43.174570  568189 preload.go:238] Found /home/jenkins/minikube-integration/21808-485134/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1217 20:59:43.174613  568189 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1217 20:59:43.174766  568189 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/config.json ...
	I1217 20:59:43.198461  568189 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 20:59:43.198481  568189 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 20:59:43.198493  568189 cache.go:243] Successfully downloaded all kic artifacts
	I1217 20:59:43.198516  568189 start.go:360] acquireMachinesLock for ha-148567-m04: {Name:mk553b42915df9bd549a5c28a2faaee12bc3aaa4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 20:59:43.198572  568189 start.go:364] duration metric: took 34.134µs to acquireMachinesLock for "ha-148567-m04"
	I1217 20:59:43.198597  568189 start.go:96] Skipping create...Using existing machine configuration
	I1217 20:59:43.198602  568189 fix.go:54] fixHost starting: m04
	I1217 20:59:43.198879  568189 cli_runner.go:164] Run: docker container inspect ha-148567-m04 --format={{.State.Status}}
	I1217 20:59:43.217750  568189 fix.go:112] recreateIfNeeded on ha-148567-m04: state=Stopped err=<nil>
	W1217 20:59:43.217781  568189 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 20:59:43.221013  568189 out.go:252] * Restarting existing docker container for "ha-148567-m04" ...
	I1217 20:59:43.221102  568189 cli_runner.go:164] Run: docker start ha-148567-m04
	I1217 20:59:43.516797  568189 cli_runner.go:164] Run: docker container inspect ha-148567-m04 --format={{.State.Status}}
	I1217 20:59:43.540017  568189 kic.go:430] container "ha-148567-m04" state is running.
	I1217 20:59:43.540568  568189 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-148567-m04
	I1217 20:59:43.574859  568189 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/config.json ...
	I1217 20:59:43.575129  568189 machine.go:94] provisionDockerMachine start ...
	I1217 20:59:43.575199  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567-m04
	I1217 20:59:43.606726  568189 main.go:143] libmachine: Using SSH client type: native
	I1217 20:59:43.607040  568189 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33223 <nil> <nil>}
	I1217 20:59:43.607056  568189 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 20:59:43.607773  568189 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58242->127.0.0.1:33223: read: connection reset by peer
	I1217 20:59:46.803819  568189 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-148567-m04
	
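Note: the handshake failure at 20:59:43 is benign: the container had just been restarted and sshd was not yet accepting connections, so libmachine retries until the hostname probe succeeds about three seconds later. A minimal wait-for-ssh loop in the same spirit, assuming the forwarded port 33223 from this log:

    # poll until sshd on the forwarded port accepts TCP connections
    until nc -z 127.0.0.1 33223; do sleep 1; done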
	I1217 20:59:46.803848  568189 ubuntu.go:182] provisioning hostname "ha-148567-m04"
	I1217 20:59:46.803941  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567-m04
	I1217 20:59:46.836537  568189 main.go:143] libmachine: Using SSH client type: native
	I1217 20:59:46.836852  568189 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33223 <nil> <nil>}
	I1217 20:59:46.836874  568189 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-148567-m04 && echo "ha-148567-m04" | sudo tee /etc/hostname
	I1217 20:59:47.026899  568189 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-148567-m04
	
	I1217 20:59:47.027037  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567-m04
	I1217 20:59:47.062751  568189 main.go:143] libmachine: Using SSH client type: native
	I1217 20:59:47.063061  568189 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33223 <nil> <nil>}
	I1217 20:59:47.063082  568189 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-148567-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-148567-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-148567-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 20:59:47.256926  568189 main.go:143] libmachine: SSH cmd err, output: <nil>: 
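Note: the script above applies the Debian convention of mapping the machine's own hostname to 127.0.1.1: if no /etc/hosts line mentions ha-148567-m04 yet, it rewrites an existing 127.0.1.1 entry in place or appends a new one. A quick check that provisioning took effect (a sketch, not part of the run):

    hostname                     # should print ha-148567-m04
    getent hosts ha-148567-m04   # resolves via the 127.0.1.1 entry just written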
	I1217 20:59:47.257018  568189 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21808-485134/.minikube CaCertPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21808-485134/.minikube}
	I1217 20:59:47.257283  568189 ubuntu.go:190] setting up certificates
	I1217 20:59:47.257314  568189 provision.go:84] configureAuth start
	I1217 20:59:47.257398  568189 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-148567-m04
	I1217 20:59:47.295834  568189 provision.go:143] copyHostCerts
	I1217 20:59:47.295877  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem
	I1217 20:59:47.295912  568189 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem, removing ...
	I1217 20:59:47.295919  568189 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem
	I1217 20:59:47.296003  568189 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem (1082 bytes)
	I1217 20:59:47.296090  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem
	I1217 20:59:47.296108  568189 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem, removing ...
	I1217 20:59:47.296113  568189 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem
	I1217 20:59:47.296139  568189 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem (1123 bytes)
	I1217 20:59:47.296196  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem
	I1217 20:59:47.296215  568189 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem, removing ...
	I1217 20:59:47.296219  568189 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem
	I1217 20:59:47.296250  568189 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem (1675 bytes)
	I1217 20:59:47.296313  568189 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem org=jenkins.ha-148567-m04 san=[127.0.0.1 192.168.49.5 ha-148567-m04 localhost minikube]
	I1217 20:59:47.379272  568189 provision.go:177] copyRemoteCerts
	I1217 20:59:47.379345  568189 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 20:59:47.379394  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567-m04
	I1217 20:59:47.403843  568189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33223 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/ha-148567-m04/id_rsa Username:docker}
	I1217 20:59:47.518369  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1217 20:59:47.518441  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 20:59:47.576564  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1217 20:59:47.576687  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1217 20:59:47.604142  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1217 20:59:47.604201  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 20:59:47.631334  568189 provision.go:87] duration metric: took 373.991006ms to configureAuth
	I1217 20:59:47.631359  568189 ubuntu.go:206] setting minikube options for container-runtime
	I1217 20:59:47.631685  568189 config.go:182] Loaded profile config "ha-148567": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:59:47.631793  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567-m04
	I1217 20:59:47.657183  568189 main.go:143] libmachine: Using SSH client type: native
	I1217 20:59:47.657502  568189 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33223 <nil> <nil>}
	I1217 20:59:47.657518  568189 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 20:59:48.158234  568189 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
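Note: the sysconfig drop-in above hands CRI-O --insecure-registry 10.96.0.0/12, i.e. the whole service CIDR, so pulls from in-cluster registries exposed on ClusterIPs (such as the registry addon) skip TLS verification; the crio restart in the same command picks it up. To confirm on the node (a sketch; the second line assumes the crio unit expands $CRIO_MINIKUBE_OPTIONS onto its command line):

    cat /etc/sysconfig/crio.minikube   # the option as written to disk
    ps -o args= -C crio                # the option as present on the running daemon's command line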
	I1217 20:59:48.158306  568189 machine.go:97] duration metric: took 4.583160847s to provisionDockerMachine
	I1217 20:59:48.158332  568189 start.go:293] postStartSetup for "ha-148567-m04" (driver="docker")
	I1217 20:59:48.158359  568189 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 20:59:48.158470  568189 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 20:59:48.158549  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567-m04
	I1217 20:59:48.182261  568189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33223 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/ha-148567-m04/id_rsa Username:docker}
	I1217 20:59:48.298135  568189 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 20:59:48.311846  568189 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 20:59:48.311884  568189 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 20:59:48.311907  568189 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-485134/.minikube/addons for local assets ...
	I1217 20:59:48.311974  568189 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-485134/.minikube/files for local assets ...
	I1217 20:59:48.312067  568189 filesync.go:149] local asset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem -> 4884122.pem in /etc/ssl/certs
	I1217 20:59:48.312079  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem -> /etc/ssl/certs/4884122.pem
	I1217 20:59:48.312200  568189 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 20:59:48.329656  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem --> /etc/ssl/certs/4884122.pem (1708 bytes)
	I1217 20:59:48.373531  568189 start.go:296] duration metric: took 215.167593ms for postStartSetup
	I1217 20:59:48.373663  568189 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 20:59:48.373725  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567-m04
	I1217 20:59:48.400005  568189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33223 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/ha-148567-m04/id_rsa Username:docker}
	I1217 20:59:48.502218  568189 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 20:59:48.508483  568189 fix.go:56] duration metric: took 5.309874613s for fixHost
	I1217 20:59:48.508507  568189 start.go:83] releasing machines lock for "ha-148567-m04", held for 5.309926708s
	I1217 20:59:48.508573  568189 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-148567-m04
	I1217 20:59:48.542166  568189 out.go:179] * Found network options:
	I1217 20:59:48.545031  568189 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	W1217 20:59:48.547822  568189 proxy.go:120] fail to check proxy env: Error ip not in block
	W1217 20:59:48.547865  568189 proxy.go:120] fail to check proxy env: Error ip not in block
	W1217 20:59:48.547882  568189 proxy.go:120] fail to check proxy env: Error ip not in block
	I1217 20:59:48.547964  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem (1338 bytes)
	W1217 20:59:48.548007  568189 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412_empty.pem, impossibly tiny 0 bytes
	I1217 20:59:48.548015  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 20:59:48.548043  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem (1082 bytes)
	I1217 20:59:48.548068  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem (1123 bytes)
	I1217 20:59:48.548092  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem (1675 bytes)
	I1217 20:59:48.548135  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem (1708 bytes)
	I1217 20:59:48.548169  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem -> /usr/share/ca-certificates/4884122.pem
	I1217 20:59:48.548185  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:59:48.548196  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem -> /usr/share/ca-certificates/488412.pem
	I1217 20:59:48.548214  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem --> /usr/share/ca-certificates/4884122.pem (1708 bytes)
	I1217 20:59:48.548266  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567-m04
	I1217 20:59:48.578677  568189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33223 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/ha-148567-m04/id_rsa Username:docker}
	I1217 20:59:48.719848  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 20:59:48.753882  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem --> /usr/share/ca-certificates/488412.pem (1338 bytes)
	I1217 20:59:48.792107  568189 ssh_runner.go:195] Run: openssl version
	I1217 20:59:48.804085  568189 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/488412.pem
	I1217 20:59:48.816313  568189 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/488412.pem /etc/ssl/certs/488412.pem
	I1217 20:59:48.832761  568189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/488412.pem
	I1217 20:59:48.840746  568189 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 20:21 /usr/share/ca-certificates/488412.pem
	I1217 20:59:48.840863  568189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/488412.pem
	I1217 20:59:48.902488  568189 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 20:59:48.912364  568189 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4884122.pem
	I1217 20:59:48.923914  568189 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4884122.pem /etc/ssl/certs/4884122.pem
	I1217 20:59:48.940092  568189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4884122.pem
	I1217 20:59:48.947071  568189 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 20:21 /usr/share/ca-certificates/4884122.pem
	I1217 20:59:48.947150  568189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4884122.pem
	I1217 20:59:49.021813  568189 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 20:59:49.034659  568189 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:59:49.053384  568189 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 20:59:49.069859  568189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:59:49.077887  568189 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:59:49.078004  568189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:59:49.137254  568189 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 20:59:49.153091  568189 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-certificates >/dev/null 2>&1 && sudo update-ca-certificates || true"
	I1217 20:59:49.159186  568189 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-trust >/dev/null 2>&1 && sudo update-ca-trust extract || true"
	W1217 20:59:49.165011  568189 proxy.go:120] fail to check proxy env: Error ip not in block
	W1217 20:59:49.165053  568189 proxy.go:120] fail to check proxy env: Error ip not in block
	W1217 20:59:49.165063  568189 proxy.go:120] fail to check proxy env: Error ip not in block
	I1217 20:59:49.165151  568189 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 20:59:49.165273  568189 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 20:59:49.359347  568189 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 20:59:49.368376  568189 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 20:59:49.368491  568189 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 20:59:49.391939  568189 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 20:59:49.392014  568189 start.go:496] detecting cgroup driver to use...
	I1217 20:59:49.392069  568189 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 20:59:49.392143  568189 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 20:59:49.427410  568189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 20:59:49.445092  568189 docker.go:218] disabling cri-docker service (if available) ...
	I1217 20:59:49.445199  568189 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 20:59:49.463345  568189 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 20:59:49.480078  568189 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 20:59:49.663757  568189 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 20:59:49.840193  568189 docker.go:234] disabling docker service ...
	I1217 20:59:49.840317  568189 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 20:59:49.860557  568189 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 20:59:49.877087  568189 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 20:59:50.055711  568189 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 20:59:50.231385  568189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 20:59:50.254028  568189 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 20:59:50.285776  568189 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 20:59:50.285901  568189 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:59:50.299125  568189 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1217 20:59:50.299249  568189 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:59:50.308719  568189 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:59:50.317674  568189 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:59:50.326552  568189 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 20:59:50.334774  568189 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:59:50.343683  568189 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:59:50.357610  568189 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:59:50.371978  568189 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 20:59:50.381012  568189 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 20:59:50.389890  568189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:59:50.573931  568189 ssh_runner.go:195] Run: sudo systemctl restart crio
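Note: the sed sequence above edits /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, set cgroup_manager to the "cgroupfs" driver detected on the host, put conmon in the pod cgroup, and open unprivileged low ports via default_sysctls; it also enables IPv4 forwarding before restarting crio. Assuming every sed expression matched, the resulting fragment of that file looks roughly like:

    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]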
	I1217 20:59:50.817600  568189 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 20:59:50.817730  568189 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 20:59:50.823707  568189 start.go:564] Will wait 60s for crictl version
	I1217 20:59:50.823823  568189 ssh_runner.go:195] Run: which crictl
	I1217 20:59:50.829375  568189 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 20:59:50.907046  568189 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
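Note: the version probe above exercises the CRI endpoint configured in /etc/crictl.yaml a few lines earlier; the equivalent explicit invocation would be:

    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version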
	I1217 20:59:50.907198  568189 ssh_runner.go:195] Run: crio --version
	I1217 20:59:50.968526  568189 ssh_runner.go:195] Run: crio --version
	I1217 20:59:51.022232  568189 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	I1217 20:59:51.025095  568189 out.go:179]   - env NO_PROXY=192.168.49.2
	I1217 20:59:51.028040  568189 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1217 20:59:51.031031  568189 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	I1217 20:59:51.033982  568189 cli_runner.go:164] Run: docker network inspect ha-148567 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 20:59:51.058290  568189 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1217 20:59:51.064756  568189 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 20:59:51.084472  568189 mustload.go:66] Loading cluster: ha-148567
	I1217 20:59:51.084822  568189 config.go:182] Loaded profile config "ha-148567": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:59:51.085173  568189 cli_runner.go:164] Run: docker container inspect ha-148567 --format={{.State.Status}}
	I1217 20:59:51.122113  568189 host.go:66] Checking if "ha-148567" exists ...
	I1217 20:59:51.122410  568189 certs.go:69] Setting up /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567 for IP: 192.168.49.5
	I1217 20:59:51.122425  568189 certs.go:195] generating shared ca certs ...
	I1217 20:59:51.122444  568189 certs.go:227] acquiring lock for ca certs: {Name:mk2b1bd9fa0b029b02dd38a7b1b08717d4426eca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:59:51.122555  568189 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.key
	I1217 20:59:51.122603  568189 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.key
	I1217 20:59:51.122617  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1217 20:59:51.122638  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1217 20:59:51.122649  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1217 20:59:51.122665  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1217 20:59:51.122723  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem (1338 bytes)
	W1217 20:59:51.122759  568189 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412_empty.pem, impossibly tiny 0 bytes
	I1217 20:59:51.122771  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 20:59:51.122798  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem (1082 bytes)
	I1217 20:59:51.122830  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem (1123 bytes)
	I1217 20:59:51.122855  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem (1675 bytes)
	I1217 20:59:51.122904  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem (1708 bytes)
	I1217 20:59:51.122943  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem -> /usr/share/ca-certificates/4884122.pem
	I1217 20:59:51.122961  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:59:51.122973  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem -> /usr/share/ca-certificates/488412.pem
	I1217 20:59:51.122997  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 20:59:51.146685  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 20:59:51.175270  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 20:59:51.202157  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1217 20:59:51.226103  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem --> /usr/share/ca-certificates/4884122.pem (1708 bytes)
	I1217 20:59:51.248874  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 20:59:51.269857  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem --> /usr/share/ca-certificates/488412.pem (1338 bytes)
	I1217 20:59:51.310997  568189 ssh_runner.go:195] Run: openssl version
	I1217 20:59:51.319341  568189 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/488412.pem
	I1217 20:59:51.330020  568189 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/488412.pem /etc/ssl/certs/488412.pem
	I1217 20:59:51.339343  568189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/488412.pem
	I1217 20:59:51.350841  568189 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 20:21 /usr/share/ca-certificates/488412.pem
	I1217 20:59:51.350957  568189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/488412.pem
	I1217 20:59:51.400605  568189 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 20:59:51.414512  568189 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4884122.pem
	I1217 20:59:51.424023  568189 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4884122.pem /etc/ssl/certs/4884122.pem
	I1217 20:59:51.432640  568189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4884122.pem
	I1217 20:59:51.437401  568189 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 20:21 /usr/share/ca-certificates/4884122.pem
	I1217 20:59:51.437481  568189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4884122.pem
	I1217 20:59:51.482765  568189 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 20:59:51.491449  568189 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:59:51.501741  568189 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 20:59:51.515339  568189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:59:51.520544  568189 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:59:51.520666  568189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:59:51.565528  568189 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 20:59:51.574279  568189 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 20:59:51.579195  568189 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 20:59:51.579288  568189 kubeadm.go:935] updating node {m04 192.168.49.5 0 v1.34.3  false true} ...
	I1217 20:59:51.579397  568189 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-148567-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:ha-148567 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 20:59:51.579514  568189 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1217 20:59:51.588520  568189 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 20:59:51.588644  568189 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1217 20:59:51.600506  568189 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1217 20:59:51.617987  568189 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 20:59:51.637341  568189 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1217 20:59:51.641707  568189 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 20:59:51.653386  568189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:59:51.824077  568189 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 20:59:51.843148  568189 start.go:236] Will wait 6m0s for node &{Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.3 ContainerRuntime: ControlPlane:false Worker:true}
	I1217 20:59:51.843522  568189 config.go:182] Loaded profile config "ha-148567": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:59:51.848815  568189 out.go:179] * Verifying Kubernetes components...
	I1217 20:59:51.852560  568189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:59:51.982897  568189 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 20:59:52.000066  568189 kapi.go:59] client config for ha-148567: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/client.crt", KeyFile:"/home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/client.key", CAFile:"/home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb51f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1217 20:59:52.000192  568189 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1217 20:59:52.000451  568189 node_ready.go:35] waiting up to 6m0s for node "ha-148567-m04" to be "Ready" ...
	I1217 20:59:52.006183  568189 node_ready.go:49] node "ha-148567-m04" is "Ready"
	I1217 20:59:52.006239  568189 node_ready.go:38] duration metric: took 5.759781ms for node "ha-148567-m04" to be "Ready" ...
	I1217 20:59:52.006258  568189 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 20:59:52.006601  568189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 20:59:52.047225  568189 system_svc.go:56] duration metric: took 40.959365ms WaitForService to wait for kubelet
	I1217 20:59:52.047255  568189 kubeadm.go:587] duration metric: took 203.674646ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 20:59:52.047276  568189 node_conditions.go:102] verifying NodePressure condition ...
	I1217 20:59:52.051902  568189 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1217 20:59:52.051946  568189 node_conditions.go:123] node cpu capacity is 2
	I1217 20:59:52.051960  568189 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1217 20:59:52.051980  568189 node_conditions.go:123] node cpu capacity is 2
	I1217 20:59:52.051986  568189 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1217 20:59:52.051991  568189 node_conditions.go:123] node cpu capacity is 2
	I1217 20:59:52.052000  568189 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1217 20:59:52.052005  568189 node_conditions.go:123] node cpu capacity is 2
	I1217 20:59:52.052015  568189 node_conditions.go:105] duration metric: took 4.734079ms to run NodePressure ...
	I1217 20:59:52.052027  568189 start.go:242] waiting for startup goroutines ...
	I1217 20:59:52.052063  568189 start.go:256] writing updated cluster config ...
	I1217 20:59:52.052403  568189 ssh_runner.go:195] Run: rm -f paused
	I1217 20:59:52.057083  568189 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 20:59:52.057721  568189 kapi.go:59] client config for ha-148567: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/client.crt", KeyFile:"/home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/client.key", CAFile:"/home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb51f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 20:59:52.075282  568189 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-l8xqv" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:59:52.083372  568189 pod_ready.go:94] pod "coredns-66bc5c9577-l8xqv" is "Ready"
	I1217 20:59:52.083403  568189 pod_ready.go:86] duration metric: took 8.086341ms for pod "coredns-66bc5c9577-l8xqv" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:59:52.083414  568189 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-wgcmx" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:59:52.104642  568189 pod_ready.go:94] pod "coredns-66bc5c9577-wgcmx" is "Ready"
	I1217 20:59:52.104676  568189 pod_ready.go:86] duration metric: took 21.254359ms for pod "coredns-66bc5c9577-wgcmx" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:59:52.108222  568189 pod_ready.go:83] waiting for pod "etcd-ha-148567" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:59:52.114067  568189 pod_ready.go:94] pod "etcd-ha-148567" is "Ready"
	I1217 20:59:52.114095  568189 pod_ready.go:86] duration metric: took 5.843992ms for pod "etcd-ha-148567" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:59:52.114104  568189 pod_ready.go:83] waiting for pod "etcd-ha-148567-m02" in "kube-system" namespace to be "Ready" or be gone ...
	W1217 20:59:54.121101  568189 pod_ready.go:104] pod "etcd-ha-148567-m02" is not "Ready", error: <nil>
	W1217 20:59:56.121594  568189 pod_ready.go:104] pod "etcd-ha-148567-m02" is not "Ready", error: <nil>
	I1217 20:59:58.129487  568189 pod_ready.go:94] pod "etcd-ha-148567-m02" is "Ready"
	I1217 20:59:58.129512  568189 pod_ready.go:86] duration metric: took 6.015400557s for pod "etcd-ha-148567-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:59:58.129523  568189 pod_ready.go:83] waiting for pod "etcd-ha-148567-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:59:58.142269  568189 pod_ready.go:94] pod "etcd-ha-148567-m03" is "Ready"
	I1217 20:59:58.142292  568189 pod_ready.go:86] duration metric: took 12.762885ms for pod "etcd-ha-148567-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:59:58.146453  568189 pod_ready.go:83] waiting for pod "kube-apiserver-ha-148567" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:59:58.164280  568189 pod_ready.go:94] pod "kube-apiserver-ha-148567" is "Ready"
	I1217 20:59:58.164356  568189 pod_ready.go:86] duration metric: took 17.878983ms for pod "kube-apiserver-ha-148567" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:59:58.164381  568189 pod_ready.go:83] waiting for pod "kube-apiserver-ha-148567-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:59:58.259174  568189 request.go:683] "Waited before sending request" delay="88.189794ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-148567-m02"
	I1217 20:59:58.268569  568189 pod_ready.go:94] pod "kube-apiserver-ha-148567-m02" is "Ready"
	I1217 20:59:58.268593  568189 pod_ready.go:86] duration metric: took 104.192931ms for pod "kube-apiserver-ha-148567-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:59:58.268603  568189 pod_ready.go:83] waiting for pod "kube-apiserver-ha-148567-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:59:58.458982  568189 request.go:683] "Waited before sending request" delay="190.303242ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-148567-m03"
	I1217 20:59:58.658315  568189 request.go:683] "Waited before sending request" delay="195.215539ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-148567-m03"
	I1217 20:59:58.661689  568189 pod_ready.go:94] pod "kube-apiserver-ha-148567-m03" is "Ready"
	I1217 20:59:58.661723  568189 pod_ready.go:86] duration metric: took 393.113399ms for pod "kube-apiserver-ha-148567-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:59:58.859073  568189 request.go:683] "Waited before sending request" delay="197.228659ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I1217 20:59:58.863798  568189 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-148567" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:59:59.059209  568189 request.go:683] "Waited before sending request" delay="195.315815ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-148567"
	I1217 20:59:59.258903  568189 request.go:683] "Waited before sending request" delay="196.340082ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-148567"
	I1217 20:59:59.265017  568189 pod_ready.go:94] pod "kube-controller-manager-ha-148567" is "Ready"
	I1217 20:59:59.265041  568189 pod_ready.go:86] duration metric: took 401.217693ms for pod "kube-controller-manager-ha-148567" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:59:59.265051  568189 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-148567-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:59:59.458390  568189 request.go:683] "Waited before sending request" delay="193.253489ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-148567-m02"
	I1217 20:59:59.658551  568189 request.go:683] "Waited before sending request" delay="180.126333ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-148567-m02"
	I1217 20:59:59.662062  568189 pod_ready.go:94] pod "kube-controller-manager-ha-148567-m02" is "Ready"
	I1217 20:59:59.662093  568189 pod_ready.go:86] duration metric: took 397.034758ms for pod "kube-controller-manager-ha-148567-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:59:59.662104  568189 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-148567-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:59:59.858282  568189 request.go:683] "Waited before sending request" delay="196.102269ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-148567-m03"
	I1217 21:00:00.075408  568189 request.go:683] "Waited before sending request" delay="213.781913ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-148567-m03"
	I1217 21:00:00.089107  568189 pod_ready.go:94] pod "kube-controller-manager-ha-148567-m03" is "Ready"
	I1217 21:00:00.089136  568189 pod_ready.go:86] duration metric: took 427.024958ms for pod "kube-controller-manager-ha-148567-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 21:00:00.258516  568189 request.go:683] "Waited before sending request" delay="169.272025ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-proxy"
	I1217 21:00:00.322743  568189 pod_ready.go:83] waiting for pod "kube-proxy-8nmpd" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 21:00:00.459982  568189 request.go:683] "Waited before sending request" delay="137.098152ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8nmpd"
	I1217 21:00:00.701120  568189 pod_ready.go:94] pod "kube-proxy-8nmpd" is "Ready"
	I1217 21:00:00.701146  568189 pod_ready.go:86] duration metric: took 378.365284ms for pod "kube-proxy-8nmpd" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 21:00:00.701157  568189 pod_ready.go:83] waiting for pod "kube-proxy-9n5cb" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 21:00:00.858493  568189 request.go:683] "Waited before sending request" delay="157.248259ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9n5cb"
	I1217 21:00:01.058920  568189 request.go:683] "Waited before sending request" delay="150.537073ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-148567"
	I1217 21:00:01.068198  568189 pod_ready.go:94] pod "kube-proxy-9n5cb" is "Ready"
	I1217 21:00:01.068230  568189 pod_ready.go:86] duration metric: took 367.062133ms for pod "kube-proxy-9n5cb" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 21:00:01.068243  568189 pod_ready.go:83] waiting for pod "kube-proxy-9rv8b" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 21:00:01.262645  568189 request.go:683] "Waited before sending request" delay="194.315293ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9rv8b"
	I1217 21:00:01.458640  568189 request.go:683] "Waited before sending request" delay="153.080094ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-148567-m04"
	I1217 21:00:01.462978  568189 pod_ready.go:94] pod "kube-proxy-9rv8b" is "Ready"
	I1217 21:00:01.463012  568189 pod_ready.go:86] duration metric: took 394.75948ms for pod "kube-proxy-9rv8b" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 21:00:01.463024  568189 pod_ready.go:83] waiting for pod "kube-proxy-cbk47" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 21:00:01.658301  568189 request.go:683] "Waited before sending request" delay="195.184202ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cbk47"
	I1217 21:00:01.858277  568189 request.go:683] "Waited before sending request" delay="195.25946ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-148567-m02"
	I1217 21:00:01.862378  568189 pod_ready.go:94] pod "kube-proxy-cbk47" is "Ready"
	I1217 21:00:01.862409  568189 pod_ready.go:86] duration metric: took 399.37762ms for pod "kube-proxy-cbk47" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 21:00:02.058910  568189 request.go:683] "Waited before sending request" delay="196.359519ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-scheduler"
	I1217 21:00:02.063347  568189 pod_ready.go:83] waiting for pod "kube-scheduler-ha-148567" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 21:00:02.258828  568189 request.go:683] "Waited before sending request" delay="195.344917ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-148567"
	I1217 21:00:02.458794  568189 request.go:683] "Waited before sending request" delay="192.303347ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-148567"
	I1217 21:00:02.462249  568189 pod_ready.go:94] pod "kube-scheduler-ha-148567" is "Ready"
	I1217 21:00:02.462330  568189 pod_ready.go:86] duration metric: took 398.949995ms for pod "kube-scheduler-ha-148567" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 21:00:02.462347  568189 pod_ready.go:83] waiting for pod "kube-scheduler-ha-148567-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 21:00:02.658751  568189 request.go:683] "Waited before sending request" delay="196.3297ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-148567-m02"
	I1217 21:00:02.858900  568189 request.go:683] "Waited before sending request" delay="196.191697ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-148567-m02"
	I1217 21:00:03.058800  568189 request.go:683] "Waited before sending request" delay="96.270325ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-148567-m02"
	I1217 21:00:03.258609  568189 request.go:683] "Waited before sending request" delay="196.310803ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-148567-m02"
	I1217 21:00:03.658820  568189 request.go:683] "Waited before sending request" delay="192.320766ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-148567-m02"
	I1217 21:00:04.059107  568189 request.go:683] "Waited before sending request" delay="91.269847ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-148567-m02"
	W1217 21:00:04.473348  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:00:06.969463  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:00:08.970426  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:00:11.469067  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:00:13.469840  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:00:15.969240  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:00:17.970193  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:00:20.472073  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:00:22.968559  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:00:24.969719  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:00:26.969862  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:00:29.470421  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:00:31.972330  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:00:34.469131  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:00:36.470941  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:00:38.970444  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:00:41.469557  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:00:43.469705  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:00:45.969149  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:00:47.969777  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:00:50.469751  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:00:52.969483  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:00:54.969568  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:00:57.468587  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:00:59.469765  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:01:01.470220  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:01:03.968803  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:01:05.969289  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:01:07.970839  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:01:10.469532  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:01:12.470536  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:01:14.968677  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:01:16.969870  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:01:19.469773  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:01:21.473506  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:01:23.970699  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:01:26.469423  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:01:28.470176  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:01:30.970041  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:01:33.468708  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:01:35.470792  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:01:37.470979  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:01:39.969393  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:01:41.971168  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:01:43.973569  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:01:46.469101  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:01:48.469649  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:01:50.469830  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:01:52.969858  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:01:55.468819  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:01:57.469502  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:01:59.473027  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:02:01.969273  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:02:03.970006  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:02:06.469903  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:02:08.470528  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:02:10.969500  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:02:12.969708  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:02:15.469498  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:02:17.969560  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:02:20.471040  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:02:22.970398  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:02:25.470111  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:02:27.969892  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:02:30.470124  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:02:32.969858  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:02:34.970684  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:02:36.970849  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:02:39.468689  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:02:41.469503  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:02:43.969114  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:02:45.969652  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:02:47.970284  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:02:50.469486  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:02:52.469974  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:02:54.470624  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:02:56.969815  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:02:59.469488  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:03:01.469627  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:03:03.970512  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:03:06.469961  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:03:08.969174  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:03:10.969626  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:03:12.970730  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:03:15.469047  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:03:17.470130  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:03:19.473448  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:03:21.969933  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:03:23.970894  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:03:26.470713  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:03:28.968830  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:03:30.970218  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:03:33.468960  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:03:35.469770  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:03:37.968748  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:03:39.968975  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:03:41.969305  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:03:44.468880  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:03:46.469851  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:03:48.968886  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:03:50.969624  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	I1217 21:03:52.057311  568189 pod_ready.go:86] duration metric: took 3m49.59494638s for pod "kube-scheduler-ha-148567-m02" in "kube-system" namespace to be "Ready" or be gone ...
	W1217 21:03:52.057351  568189 pod_ready.go:65] not all pods in "kube-system" namespace with "component=kube-scheduler" label are "Ready", will retry: waitPodCondition: context deadline exceeded
	I1217 21:03:52.057365  568189 pod_ready.go:40] duration metric: took 4m0.000201029s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 21:03:52.060383  568189 out.go:203] 
	W1217 21:03:52.063300  568189 out.go:285] X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded
	X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded
	I1217 21:03:52.066188  568189 out.go:203] 

** /stderr **
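
The stderr above ends in GUEST_START because minikube's extra wait (pod_ready.go) polls every "kube-system" pod carrying one of the listed component labels and requires each to report the "Ready" condition within 4m0s; kube-scheduler-ha-148567-m02 never did, so start exits with status 80. For reference, a minimal client-go sketch of that readiness check, assuming a reachable kubeconfig at the default location; the file name, the podReady helper, and the narrowing to the kube-scheduler label are illustrative, not minikube's actual code:

// podready_sketch.go — checks the PodReady condition the way the wait loop above does.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumes ~/.kube/config points at the cluster under test.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(),
		metav1.ListOptions{LabelSelector: "component=kube-scheduler"})
	if err != nil {
		panic(err)
	}
	for i := range pods.Items {
		p := &pods.Items[i]
		fmt.Printf("%s ready=%v\n", p.Name, podReady(p))
	}
}

Against this cluster it would presumably have printed ready=false for kube-scheduler-ha-148567-m02 for the entire four-minute window logged above.
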
ha_test.go:471: failed to run minikube start. args "out/minikube-linux-arm64 -p ha-148567 node list --alsologtostderr -v 5" : exit status 80
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 node list --alsologtostderr -v 5
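
The repeated request.go:683 "Waited before sending request" entries in the stderr are client-go's client-side rate limiter at work: QPS and Burst are zero in the rest.Config dump above, so the client falls back to its defaults (5 requests/s, burst 10), and the back-to-back node and pod GETs in the wait loop are each delayed by roughly 90-200ms. A sketch of raising those limits on a rest.Config, assuming the same default kubeconfig; the values 50/100 are illustrative:

package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// Zero values mean client-go's defaults (QPS 5, Burst 10); raising them
	// avoids the "Waited before sending request" client-side throttling.
	cfg.QPS = 50
	cfg.Burst = 100
	if _, err := kubernetes.NewForConfig(cfg); err != nil {
		panic(err)
	}
}

The throttling only slows the polling here; the underlying failure is the scheduler pod on m02 never becoming Ready.
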
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect ha-148567
helpers_test.go:244: (dbg) docker inspect ha-148567:

-- stdout --
	[
	    {
	        "Id": "88230c4afd3a17f60bc70f3b880924c24773c5848052be68927148210c5cca08",
	        "Created": "2025-12-17T20:52:31.092462673Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 568318,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T20:57:11.629499973Z",
	            "FinishedAt": "2025-12-17T20:57:11.031529194Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/88230c4afd3a17f60bc70f3b880924c24773c5848052be68927148210c5cca08/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/88230c4afd3a17f60bc70f3b880924c24773c5848052be68927148210c5cca08/hostname",
	        "HostsPath": "/var/lib/docker/containers/88230c4afd3a17f60bc70f3b880924c24773c5848052be68927148210c5cca08/hosts",
	        "LogPath": "/var/lib/docker/containers/88230c4afd3a17f60bc70f3b880924c24773c5848052be68927148210c5cca08/88230c4afd3a17f60bc70f3b880924c24773c5848052be68927148210c5cca08-json.log",
	        "Name": "/ha-148567",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-148567:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-148567",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "88230c4afd3a17f60bc70f3b880924c24773c5848052be68927148210c5cca08",
	                "LowerDir": "/var/lib/docker/overlay2/8e673472741feb79114d21cdb1133ef1554ec9c1de94e1353239170ab1e99ffe-init/diff:/var/lib/docker/overlay2/c4759b344ecb109c83d66e4ef56c76903f1d1e597efb86ab2d2c7911b1130a8f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8e673472741feb79114d21cdb1133ef1554ec9c1de94e1353239170ab1e99ffe/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8e673472741feb79114d21cdb1133ef1554ec9c1de94e1353239170ab1e99ffe/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8e673472741feb79114d21cdb1133ef1554ec9c1de94e1353239170ab1e99ffe/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-148567",
	                "Source": "/var/lib/docker/volumes/ha-148567/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-148567",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-148567",
	                "name.minikube.sigs.k8s.io": "ha-148567",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8bec05c72f8070026c65a234cc2234c9ed60a9d48a73ed7980f988d165d7313b",
	            "SandboxKey": "/var/run/docker/netns/8bec05c72f80",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33208"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33209"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33212"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33210"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33211"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-148567": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ea:1c:9e:71:58:20",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "254979ff9069c22f3a569b8e9b07ed4381f262395f3bf61c458fcf6159449939",
	                    "EndpointID": "557d101a90f45ff33539072d9ea1e4592c6793c9d7ee55f08be852661aa35e13",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-148567",
	                        "88230c4afd3a"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
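
The inspect output shows the node container itself is healthy from Docker's point of view: running since 20:57:11, with ports 22, 2376, 5000, 8443 and 32443 published on ephemeral 127.0.0.1 ports (33208-33212 in this run), which is how minikube reaches SSH and the API server on the docker driver. A sketch of reading one such binding with the Docker Engine Go SDK, assuming the container name from this report; this is not part of the test suite:

package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/client"
	"github.com/docker/go-connections/nat"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}
	defer cli.Close()
	info, err := cli.ContainerInspect(context.Background(), "ha-148567")
	if err != nil {
		panic(err)
	}
	// NetworkSettings.Ports maps "port/proto" to host bindings,
	// e.g. "22/tcp" -> 127.0.0.1:33208 in the inspect output above.
	for _, b := range info.NetworkSettings.Ports[nat.Port("22/tcp")] {
		fmt.Printf("ssh published on %s:%s\n", b.HostIP, b.HostPort)
	}
}

The same value is available from the CLI via docker inspect's Go templates (the -f flag), indexing into .NetworkSettings.Ports.
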
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-148567 -n ha-148567
helpers_test.go:253: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p ha-148567 logs -n 25: (1.820168793s)
helpers_test.go:261: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp      │ ha-148567 cp ha-148567-m03:/home/docker/cp-test.txt ha-148567-m02:/home/docker/cp-test_ha-148567-m03_ha-148567-m02.txt               │ ha-148567 │ jenkins │ v1.37.0 │ 17 Dec 25 20:55 UTC │ 17 Dec 25 20:55 UTC │
	│ ssh     │ ha-148567 ssh -n ha-148567-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-148567 │ jenkins │ v1.37.0 │ 17 Dec 25 20:55 UTC │ 17 Dec 25 20:55 UTC │
	│ ssh     │ ha-148567 ssh -n ha-148567-m02 sudo cat /home/docker/cp-test_ha-148567-m03_ha-148567-m02.txt                                         │ ha-148567 │ jenkins │ v1.37.0 │ 17 Dec 25 20:55 UTC │ 17 Dec 25 20:55 UTC │
	│ cp      │ ha-148567 cp ha-148567-m03:/home/docker/cp-test.txt ha-148567-m04:/home/docker/cp-test_ha-148567-m03_ha-148567-m04.txt               │ ha-148567 │ jenkins │ v1.37.0 │ 17 Dec 25 20:55 UTC │ 17 Dec 25 20:55 UTC │
	│ ssh     │ ha-148567 ssh -n ha-148567-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-148567 │ jenkins │ v1.37.0 │ 17 Dec 25 20:55 UTC │ 17 Dec 25 20:55 UTC │
	│ ssh     │ ha-148567 ssh -n ha-148567-m04 sudo cat /home/docker/cp-test_ha-148567-m03_ha-148567-m04.txt                                         │ ha-148567 │ jenkins │ v1.37.0 │ 17 Dec 25 20:55 UTC │ 17 Dec 25 20:55 UTC │
	│ cp      │ ha-148567 cp testdata/cp-test.txt ha-148567-m04:/home/docker/cp-test.txt                                                             │ ha-148567 │ jenkins │ v1.37.0 │ 17 Dec 25 20:55 UTC │ 17 Dec 25 20:55 UTC │
	│ ssh     │ ha-148567 ssh -n ha-148567-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-148567 │ jenkins │ v1.37.0 │ 17 Dec 25 20:55 UTC │ 17 Dec 25 20:55 UTC │
	│ cp      │ ha-148567 cp ha-148567-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3374009435/001/cp-test_ha-148567-m04.txt │ ha-148567 │ jenkins │ v1.37.0 │ 17 Dec 25 20:55 UTC │ 17 Dec 25 20:55 UTC │
	│ ssh     │ ha-148567 ssh -n ha-148567-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-148567 │ jenkins │ v1.37.0 │ 17 Dec 25 20:55 UTC │ 17 Dec 25 20:55 UTC │
	│ cp      │ ha-148567 cp ha-148567-m04:/home/docker/cp-test.txt ha-148567:/home/docker/cp-test_ha-148567-m04_ha-148567.txt                       │ ha-148567 │ jenkins │ v1.37.0 │ 17 Dec 25 20:55 UTC │ 17 Dec 25 20:55 UTC │
	│ ssh     │ ha-148567 ssh -n ha-148567-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-148567 │ jenkins │ v1.37.0 │ 17 Dec 25 20:55 UTC │ 17 Dec 25 20:55 UTC │
	│ ssh     │ ha-148567 ssh -n ha-148567 sudo cat /home/docker/cp-test_ha-148567-m04_ha-148567.txt                                                 │ ha-148567 │ jenkins │ v1.37.0 │ 17 Dec 25 20:55 UTC │ 17 Dec 25 20:55 UTC │
	│ cp      │ ha-148567 cp ha-148567-m04:/home/docker/cp-test.txt ha-148567-m02:/home/docker/cp-test_ha-148567-m04_ha-148567-m02.txt               │ ha-148567 │ jenkins │ v1.37.0 │ 17 Dec 25 20:55 UTC │ 17 Dec 25 20:55 UTC │
	│ ssh     │ ha-148567 ssh -n ha-148567-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-148567 │ jenkins │ v1.37.0 │ 17 Dec 25 20:55 UTC │ 17 Dec 25 20:55 UTC │
	│ ssh     │ ha-148567 ssh -n ha-148567-m02 sudo cat /home/docker/cp-test_ha-148567-m04_ha-148567-m02.txt                                         │ ha-148567 │ jenkins │ v1.37.0 │ 17 Dec 25 20:55 UTC │ 17 Dec 25 20:55 UTC │
	│ cp      │ ha-148567 cp ha-148567-m04:/home/docker/cp-test.txt ha-148567-m03:/home/docker/cp-test_ha-148567-m04_ha-148567-m03.txt               │ ha-148567 │ jenkins │ v1.37.0 │ 17 Dec 25 20:55 UTC │ 17 Dec 25 20:55 UTC │
	│ ssh     │ ha-148567 ssh -n ha-148567-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-148567 │ jenkins │ v1.37.0 │ 17 Dec 25 20:55 UTC │ 17 Dec 25 20:55 UTC │
	│ ssh     │ ha-148567 ssh -n ha-148567-m03 sudo cat /home/docker/cp-test_ha-148567-m04_ha-148567-m03.txt                                         │ ha-148567 │ jenkins │ v1.37.0 │ 17 Dec 25 20:55 UTC │ 17 Dec 25 20:55 UTC │
	│ node    │ ha-148567 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-148567 │ jenkins │ v1.37.0 │ 17 Dec 25 20:55 UTC │ 17 Dec 25 20:56 UTC │
	│ node    │ ha-148567 node start m02 --alsologtostderr -v 5                                                                                      │ ha-148567 │ jenkins │ v1.37.0 │ 17 Dec 25 20:56 UTC │ 17 Dec 25 20:56 UTC │
	│ node    │ ha-148567 node list --alsologtostderr -v 5                                                                                           │ ha-148567 │ jenkins │ v1.37.0 │ 17 Dec 25 20:56 UTC │                     │
	│ stop    │ ha-148567 stop --alsologtostderr -v 5                                                                                                │ ha-148567 │ jenkins │ v1.37.0 │ 17 Dec 25 20:56 UTC │ 17 Dec 25 20:57 UTC │
	│ start   │ ha-148567 start --wait true --alsologtostderr -v 5                                                                                   │ ha-148567 │ jenkins │ v1.37.0 │ 17 Dec 25 20:57 UTC │                     │
	│ node    │ ha-148567 node list --alsologtostderr -v 5                                                                                           │ ha-148567 │ jenkins │ v1.37.0 │ 17 Dec 25 21:03 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 20:57:11
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 20:57:11.358859  568189 out.go:360] Setting OutFile to fd 1 ...
	I1217 20:57:11.359079  568189 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:57:11.359113  568189 out.go:374] Setting ErrFile to fd 2...
	I1217 20:57:11.359134  568189 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:57:11.359399  568189 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-485134/.minikube/bin
	I1217 20:57:11.359857  568189 out.go:368] Setting JSON to false
	I1217 20:57:11.360732  568189 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":13181,"bootTime":1765991851,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1217 20:57:11.360834  568189 start.go:143] virtualization:  
	I1217 20:57:11.366162  568189 out.go:179] * [ha-148567] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 20:57:11.369165  568189 out.go:179]   - MINIKUBE_LOCATION=21808
	I1217 20:57:11.369340  568189 notify.go:221] Checking for updates...
	I1217 20:57:11.372773  568189 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 20:57:11.376038  568189 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-485134/kubeconfig
	I1217 20:57:11.378993  568189 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-485134/.minikube
	I1217 20:57:11.381848  568189 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 20:57:11.384979  568189 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 20:57:11.388367  568189 config.go:182] Loaded profile config "ha-148567": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:57:11.388514  568189 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 20:57:11.413210  568189 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 20:57:11.413329  568189 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 20:57:11.470988  568189 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-17 20:57:11.461612355 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 20:57:11.471099  568189 docker.go:319] overlay module found
	I1217 20:57:11.474237  568189 out.go:179] * Using the docker driver based on existing profile
	I1217 20:57:11.477144  568189 start.go:309] selected driver: docker
	I1217 20:57:11.477166  568189 start.go:927] validating driver "docker" against &{Name:ha-148567 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:ha-148567 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:57:11.477308  568189 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 20:57:11.477418  568189 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 20:57:11.541431  568189 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-17 20:57:11.532691865 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 20:57:11.541848  568189 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 20:57:11.541879  568189 cni.go:84] Creating CNI manager for ""
	I1217 20:57:11.541937  568189 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1217 20:57:11.541988  568189 start.go:353] cluster config:
	{Name:ha-148567 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:ha-148567 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:57:11.546865  568189 out.go:179] * Starting "ha-148567" primary control-plane node in "ha-148567" cluster
	I1217 20:57:11.549690  568189 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 20:57:11.552597  568189 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 20:57:11.555352  568189 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 20:57:11.555402  568189 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21808-485134/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4
	I1217 20:57:11.555416  568189 cache.go:65] Caching tarball of preloaded images
	I1217 20:57:11.555437  568189 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 20:57:11.555506  568189 preload.go:238] Found /home/jenkins/minikube-integration/21808-485134/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1217 20:57:11.555517  568189 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1217 20:57:11.555734  568189 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/config.json ...
	I1217 20:57:11.574595  568189 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 20:57:11.574619  568189 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 20:57:11.574640  568189 cache.go:243] Successfully downloaded all kic artifacts
	I1217 20:57:11.574675  568189 start.go:360] acquireMachinesLock for ha-148567: {Name:mkeea083db7bee665ba841ae2b673f302d3ac8a8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 20:57:11.574737  568189 start.go:364] duration metric: took 37.949µs to acquireMachinesLock for "ha-148567"
	I1217 20:57:11.574761  568189 start.go:96] Skipping create...Using existing machine configuration
	I1217 20:57:11.574767  568189 fix.go:54] fixHost starting: 
	I1217 20:57:11.575046  568189 cli_runner.go:164] Run: docker container inspect ha-148567 --format={{.State.Status}}
	I1217 20:57:11.592879  568189 fix.go:112] recreateIfNeeded on ha-148567: state=Stopped err=<nil>
	W1217 20:57:11.592909  568189 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 20:57:11.596175  568189 out.go:252] * Restarting existing docker container for "ha-148567" ...
	I1217 20:57:11.596256  568189 cli_runner.go:164] Run: docker start ha-148567
	I1217 20:57:11.847065  568189 cli_runner.go:164] Run: docker container inspect ha-148567 --format={{.State.Status}}
	I1217 20:57:11.870185  568189 kic.go:430] container "ha-148567" state is running.
	I1217 20:57:11.870824  568189 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-148567
	I1217 20:57:11.897361  568189 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/config.json ...
	I1217 20:57:11.897594  568189 machine.go:94] provisionDockerMachine start ...
	I1217 20:57:11.897659  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567
	I1217 20:57:11.920598  568189 main.go:143] libmachine: Using SSH client type: native
	I1217 20:57:11.920937  568189 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33208 <nil> <nil>}
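The inspect template in the cli_runner calls above is how every SSH hop resolves the host port Docker mapped to the container's 22/tcp. A minimal Go sketch of the same lookup, assuming only a docker CLI on PATH; the helper name sshHostPort is hypothetical, not minikube's API:

    package portsketch

    import (
        "os/exec"
        "strings"
    )

    // sshHostPort asks the docker CLI which host port is bound to the
    // container's SSH port, using the same Go template as the log above.
    func sshHostPort(container string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect", "-f",
            `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`, container).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

For the "ha-148567" container this resolves to 33208, the port the native SSH dialer above then connects to.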
	I1217 20:57:11.920945  568189 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 20:57:11.923893  568189 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47488->127.0.0.1:33208: read: connection reset by peer
	I1217 20:57:15.067633  568189 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-148567
	
	I1217 20:57:15.067656  568189 ubuntu.go:182] provisioning hostname "ha-148567"
	I1217 20:57:15.067737  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567
	I1217 20:57:15.086692  568189 main.go:143] libmachine: Using SSH client type: native
	I1217 20:57:15.087056  568189 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33208 <nil> <nil>}
	I1217 20:57:15.087069  568189 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-148567 && echo "ha-148567" | sudo tee /etc/hostname
	I1217 20:57:15.229459  568189 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-148567
	
	I1217 20:57:15.229547  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567
	I1217 20:57:15.248113  568189 main.go:143] libmachine: Using SSH client type: native
	I1217 20:57:15.248429  568189 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33208 <nil> <nil>}
	I1217 20:57:15.248448  568189 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-148567' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-148567/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-148567' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 20:57:15.380233  568189 main.go:143] libmachine: SSH cmd err, output: <nil>: 
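The hosts script above is deliberately idempotent: it only touches /etc/hosts when no existing line already resolves the hostname, and it prefers rewriting the 127.0.1.1 entry in place over appending. A standalone Go sketch of the same logic (ensureHostname is a hypothetical helper; minikube itself runs the shell over SSH):

    package hostsketch

    import (
        "os"
        "strings"
    )

    // ensureHostname keeps /etc/hosts mapping the node's hostname: no-op if an
    // entry exists, rewrite 127.0.1.1 if present, otherwise append a new line.
    func ensureHostname(path, hostname string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        lines := strings.Split(string(data), "\n")
        for _, l := range lines {
            fields := strings.Fields(l)
            if len(fields) > 1 && fields[len(fields)-1] == hostname {
                return nil // some entry already resolves the hostname
            }
        }
        for i, l := range lines {
            if strings.HasPrefix(l, "127.0.1.1") {
                lines[i] = "127.0.1.1 " + hostname
                return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0o644)
            }
        }
        lines = append(lines, "127.0.1.1 "+hostname)
        return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0o644)
    }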
	I1217 20:57:15.380256  568189 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21808-485134/.minikube CaCertPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21808-485134/.minikube}
	I1217 20:57:15.380318  568189 ubuntu.go:190] setting up certificates
	I1217 20:57:15.380340  568189 provision.go:84] configureAuth start
	I1217 20:57:15.380427  568189 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-148567
	I1217 20:57:15.398346  568189 provision.go:143] copyHostCerts
	I1217 20:57:15.398396  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem
	I1217 20:57:15.398436  568189 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem, removing ...
	I1217 20:57:15.398443  568189 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem
	I1217 20:57:15.398519  568189 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem (1675 bytes)
	I1217 20:57:15.398610  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem
	I1217 20:57:15.398628  568189 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem, removing ...
	I1217 20:57:15.398632  568189 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem
	I1217 20:57:15.398658  568189 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem (1082 bytes)
	I1217 20:57:15.398706  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem
	I1217 20:57:15.398722  568189 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem, removing ...
	I1217 20:57:15.398725  568189 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem
	I1217 20:57:15.398748  568189 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem (1123 bytes)
	I1217 20:57:15.398801  568189 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem org=jenkins.ha-148567 san=[127.0.0.1 192.168.49.2 ha-148567 localhost minikube]
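The server cert generated above carries the SAN list from the log line (loopback, the node IP 192.168.49.2, and the machine names) so any of those endpoints validates against it. A sketch of issuing such a cert with Go's crypto/x509, under the assumption of an existing CA cert and key; issueServerCert and the exact template fields are illustrative, not minikube's code:

    package certsketch

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "math/big"
        "net"
        "time"
    )

    // issueServerCert signs a server certificate against the given CA with
    // SANs matching the provisioning log: loopback, node IP, machine names.
    func issueServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-148567"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration in the cluster config above
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"ha-148567", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
        if err != nil {
            return nil, nil, err
        }
        return der, key, nil
    }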
	I1217 20:57:16.169383  568189 provision.go:177] copyRemoteCerts
	I1217 20:57:16.169461  568189 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 20:57:16.169502  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567
	I1217 20:57:16.187039  568189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33208 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/ha-148567/id_rsa Username:docker}
	I1217 20:57:16.287499  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1217 20:57:16.287563  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 20:57:16.305548  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1217 20:57:16.305623  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1217 20:57:16.324256  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1217 20:57:16.324318  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 20:57:16.342494  568189 provision.go:87] duration metric: took 962.127276ms to configureAuth
	I1217 20:57:16.342522  568189 ubuntu.go:206] setting minikube options for container-runtime
	I1217 20:57:16.342771  568189 config.go:182] Loaded profile config "ha-148567": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:57:16.342894  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567
	I1217 20:57:16.360548  568189 main.go:143] libmachine: Using SSH client type: native
	I1217 20:57:16.360872  568189 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33208 <nil> <nil>}
	I1217 20:57:16.360886  568189 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 20:57:16.731877  568189 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 20:57:16.731907  568189 machine.go:97] duration metric: took 4.834303602s to provisionDockerMachine
	I1217 20:57:16.731920  568189 start.go:293] postStartSetup for "ha-148567" (driver="docker")
	I1217 20:57:16.731930  568189 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 20:57:16.732002  568189 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 20:57:16.732081  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567
	I1217 20:57:16.754210  568189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33208 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/ha-148567/id_rsa Username:docker}
	I1217 20:57:16.847793  568189 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 20:57:16.851353  568189 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 20:57:16.851380  568189 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 20:57:16.851393  568189 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-485134/.minikube/addons for local assets ...
	I1217 20:57:16.851448  568189 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-485134/.minikube/files for local assets ...
	I1217 20:57:16.851530  568189 filesync.go:149] local asset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem -> 4884122.pem in /etc/ssl/certs
	I1217 20:57:16.851537  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem -> /etc/ssl/certs/4884122.pem
	I1217 20:57:16.851668  568189 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 20:57:16.859497  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem --> /etc/ssl/certs/4884122.pem (1708 bytes)
	I1217 20:57:16.877490  568189 start.go:296] duration metric: took 145.555245ms for postStartSetup
	I1217 20:57:16.877573  568189 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 20:57:16.877619  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567
	I1217 20:57:16.895083  568189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33208 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/ha-148567/id_rsa Username:docker}
	I1217 20:57:16.988718  568189 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 20:57:16.993912  568189 fix.go:56] duration metric: took 5.419138386s for fixHost
	I1217 20:57:16.993941  568189 start.go:83] releasing machines lock for "ha-148567", held for 5.419189965s
	I1217 20:57:16.994013  568189 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-148567
	I1217 20:57:17.015130  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem (1338 bytes)
	W1217 20:57:17.015192  568189 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412_empty.pem, impossibly tiny 0 bytes
	I1217 20:57:17.015202  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 20:57:17.015243  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem (1082 bytes)
	I1217 20:57:17.015276  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem (1123 bytes)
	I1217 20:57:17.015305  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem (1675 bytes)
	I1217 20:57:17.015359  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem (1708 bytes)
	I1217 20:57:17.015397  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem -> /usr/share/ca-certificates/488412.pem
	I1217 20:57:17.015413  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem -> /usr/share/ca-certificates/4884122.pem
	I1217 20:57:17.015426  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:57:17.015449  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem --> /usr/share/ca-certificates/488412.pem (1338 bytes)
	I1217 20:57:17.015509  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567
	I1217 20:57:17.032525  568189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33208 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/ha-148567/id_rsa Username:docker}
	I1217 20:57:17.141021  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem --> /usr/share/ca-certificates/4884122.pem (1708 bytes)
	I1217 20:57:17.158347  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 20:57:17.175988  568189 ssh_runner.go:195] Run: openssl version
	I1217 20:57:17.182704  568189 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4884122.pem
	I1217 20:57:17.190121  568189 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4884122.pem /etc/ssl/certs/4884122.pem
	I1217 20:57:17.197484  568189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4884122.pem
	I1217 20:57:17.201345  568189 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 20:21 /usr/share/ca-certificates/4884122.pem
	I1217 20:57:17.201430  568189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4884122.pem
	I1217 20:57:17.242261  568189 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 20:57:17.249871  568189 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:57:17.257360  568189 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 20:57:17.265311  568189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:57:17.269162  568189 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:57:17.269230  568189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:57:17.310484  568189 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 20:57:17.317908  568189 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/488412.pem
	I1217 20:57:17.325039  568189 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/488412.pem /etc/ssl/certs/488412.pem
	I1217 20:57:17.332443  568189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/488412.pem
	I1217 20:57:17.336120  568189 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 20:21 /usr/share/ca-certificates/488412.pem
	I1217 20:57:17.336229  568189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/488412.pem
	I1217 20:57:17.377375  568189 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
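The three verification cycles above (hash, symlink, test -L) follow OpenSSL's c_rehash convention: openssl x509 -hash prints the certificate's subject hash, and a <hash>.0 symlink under /etc/ssl/certs is how TLS stacks locate a CA by subject. For minikubeCA.pem the hash is b5213941, matching the /etc/ssl/certs/b5213941.0 link tested above. A Go sketch of one cycle, assuming an openssl binary on PATH (linkCert is a hypothetical helper):

    package trustsketch

    import (
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkCert computes the OpenSSL subject hash of a PEM certificate and
    // (re)creates the <hash>.0 symlink, mirroring the ln -fs runs above.
    func linkCert(pemPath, certsDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
        _ = os.Remove(link) // replace any stale link, as ln -fs would
        return os.Symlink(pemPath, link)
    }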
	I1217 20:57:17.384997  568189 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-certificates >/dev/null 2>&1 && sudo update-ca-certificates || true"
	I1217 20:57:17.388508  568189 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-trust >/dev/null 2>&1 && sudo update-ca-trust extract || true"
	I1217 20:57:17.392154  568189 ssh_runner.go:195] Run: cat /version.json
	I1217 20:57:17.392265  568189 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 20:57:17.396842  568189 ssh_runner.go:195] Run: systemctl --version
	I1217 20:57:17.490250  568189 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 20:57:17.526620  568189 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 20:57:17.531388  568189 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 20:57:17.531464  568189 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 20:57:17.539341  568189 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 20:57:17.539367  568189 start.go:496] detecting cgroup driver to use...
	I1217 20:57:17.539398  568189 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 20:57:17.539448  568189 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 20:57:17.554515  568189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 20:57:17.567414  568189 docker.go:218] disabling cri-docker service (if available) ...
	I1217 20:57:17.567477  568189 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 20:57:17.582837  568189 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 20:57:17.596146  568189 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 20:57:17.711761  568189 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 20:57:17.824951  568189 docker.go:234] disabling docker service ...
	I1217 20:57:17.825056  568189 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 20:57:17.839370  568189 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 20:57:17.852221  568189 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 20:57:17.978299  568189 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 20:57:18.106183  568189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 20:57:18.119265  568189 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 20:57:18.135218  568189 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 20:57:18.135286  568189 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:57:18.144824  568189 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1217 20:57:18.144911  568189 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:57:18.153531  568189 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:57:18.162007  568189 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:57:18.170781  568189 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 20:57:18.178861  568189 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:57:18.188770  568189 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:57:18.197027  568189 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:57:18.205801  568189 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 20:57:18.213338  568189 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
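The sed runs above share one pattern: rewrite a single key = value line in the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf. A Go sketch of that replace step (setConfigKey is a hypothetical helper, not minikube's code); setConfigKey(path, "cgroup_manager", "cgroupfs") would mirror the cgroup-driver edit logged above:

    package criosketch

    import (
        "fmt"
        "os"
        "regexp"
    )

    // setConfigKey rewrites every `key = ...` line to `key = "value"`,
    // the same effect as the logged sed substitutions.
    func setConfigKey(path, key, value string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
        out := re.ReplaceAll(data, []byte(fmt.Sprintf("%s = %q", key, value)))
        return os.WriteFile(path, out, 0o644)
    }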
	I1217 20:57:18.220373  568189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:57:18.339982  568189 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 20:57:18.523093  568189 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 20:57:18.523169  568189 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
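The 60-second wait announced above is a plain poll-until-deadline on the socket path. A minimal sketch of such a loop (waitForSocket is hypothetical; minikube's actual retry cadence may differ):

    package waitsketch

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls until the CRI socket exists or the deadline passes.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("socket %s did not appear within %s", path, timeout)
    }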
	I1217 20:57:18.526796  568189 start.go:564] Will wait 60s for crictl version
	I1217 20:57:18.526868  568189 ssh_runner.go:195] Run: which crictl
	I1217 20:57:18.530299  568189 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 20:57:18.553630  568189 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 20:57:18.553755  568189 ssh_runner.go:195] Run: crio --version
	I1217 20:57:18.582651  568189 ssh_runner.go:195] Run: crio --version
	I1217 20:57:18.616862  568189 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	I1217 20:57:18.619814  568189 cli_runner.go:164] Run: docker network inspect ha-148567 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 20:57:18.635997  568189 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1217 20:57:18.639815  568189 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 20:57:18.649441  568189 kubeadm.go:884] updating cluster {Name:ha-148567 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:ha-148567 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 20:57:18.649590  568189 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 20:57:18.649659  568189 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 20:57:18.684542  568189 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 20:57:18.684566  568189 crio.go:433] Images already preloaded, skipping extraction
	I1217 20:57:18.684622  568189 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 20:57:18.710185  568189 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 20:57:18.710209  568189 cache_images.go:86] Images are preloaded, skipping loading
	I1217 20:57:18.710218  568189 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.3 crio true true} ...
	I1217 20:57:18.710314  568189 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-148567 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:ha-148567 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 20:57:18.710393  568189 ssh_runner.go:195] Run: crio config
	I1217 20:57:18.788945  568189 cni.go:84] Creating CNI manager for ""
	I1217 20:57:18.788969  568189 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1217 20:57:18.788980  568189 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 20:57:18.789006  568189 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-148567 NodeName:ha-148567 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 20:57:18.789146  568189 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-148567"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
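The rendered kubeadm config above is one stream of four YAML documents: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A sketch of splitting such a stream and reading each document's kind, assuming the gopkg.in/yaml.v3 dependency (kinds is a hypothetical helper, not part of minikube):

    package kubeadmsketch

    import (
        "strings"

        "gopkg.in/yaml.v3"
    )

    // kinds splits a multi-document YAML stream on "---" separators and
    // collects the kind field of every non-empty document.
    func kinds(config string) ([]string, error) {
        var out []string
        for _, doc := range strings.Split(config, "\n---\n") {
            var m struct {
                Kind string `yaml:"kind"`
            }
            if err := yaml.Unmarshal([]byte(doc), &m); err != nil {
                return nil, err
            }
            if m.Kind != "" {
                out = append(out, m.Kind)
            }
        }
        return out, nil
    }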
	
	I1217 20:57:18.789173  568189 kube-vip.go:115] generating kube-vip config ...
	I1217 20:57:18.789228  568189 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1217 20:57:18.801220  568189 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
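The fallback above fires because lsmod reports no ip_vs modules, so the kube-vip config is generated without IPVS control-plane load-balancing. A rough Go equivalent of that probe reads /proc/modules directly (moduleLoaded is hypothetical and checks one exact module name, slightly stricter than the grep):

    package modsketch

    import (
        "bufio"
        "os"
        "strings"
    )

    // moduleLoaded reports whether a kernel module appears in /proc/modules.
    func moduleLoaded(name string) (bool, error) {
        f, err := os.Open("/proc/modules")
        if err != nil {
            return false, err
        }
        defer f.Close()
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            if strings.HasPrefix(sc.Text(), name+" ") {
                return true, nil
            }
        }
        return false, sc.Err()
    }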
	I1217 20:57:18.801319  568189 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1217 20:57:18.801387  568189 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1217 20:57:18.809265  568189 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 20:57:18.809341  568189 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1217 20:57:18.816975  568189 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1217 20:57:18.830189  568189 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 20:57:18.843133  568189 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1217 20:57:18.856384  568189 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1217 20:57:18.870226  568189 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1217 20:57:18.873999  568189 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 20:57:18.883854  568189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:57:18.997472  568189 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 20:57:19.014260  568189 certs.go:69] Setting up /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567 for IP: 192.168.49.2
	I1217 20:57:19.014282  568189 certs.go:195] generating shared ca certs ...
	I1217 20:57:19.014306  568189 certs.go:227] acquiring lock for ca certs: {Name:mk2b1bd9fa0b029b02dd38a7b1b08717d4426eca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:57:19.014456  568189 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.key
	I1217 20:57:19.014513  568189 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.key
	I1217 20:57:19.014526  568189 certs.go:257] generating profile certs ...
	I1217 20:57:19.014605  568189 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/client.key
	I1217 20:57:19.014640  568189 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/apiserver.key.235ee9c5
	I1217 20:57:19.014654  568189 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/apiserver.crt.235ee9c5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I1217 20:57:19.118946  568189 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/apiserver.crt.235ee9c5 ...
	I1217 20:57:19.118983  568189 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/apiserver.crt.235ee9c5: {Name:mk1086942903d0f4fe5882a203e756f5bb8d0e75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:57:19.119164  568189 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/apiserver.key.235ee9c5 ...
	I1217 20:57:19.119181  568189 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/apiserver.key.235ee9c5: {Name:mk80ca03d9af9f78d1f49f30dce3d5755dc5ecf4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:57:19.119259  568189 certs.go:382] copying /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/apiserver.crt.235ee9c5 -> /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/apiserver.crt
	I1217 20:57:19.119408  568189 certs.go:386] copying /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/apiserver.key.235ee9c5 -> /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/apiserver.key
	I1217 20:57:19.119551  568189 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/proxy-client.key
	I1217 20:57:19.119572  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1217 20:57:19.120309  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1217 20:57:19.120337  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1217 20:57:19.120353  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1217 20:57:19.120372  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1217 20:57:19.120396  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1217 20:57:19.120412  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1217 20:57:19.120422  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1217 20:57:19.120480  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem (1338 bytes)
	W1217 20:57:19.120520  568189 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412_empty.pem, impossibly tiny 0 bytes
	I1217 20:57:19.120532  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 20:57:19.120558  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem (1082 bytes)
	I1217 20:57:19.120587  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem (1123 bytes)
	I1217 20:57:19.120618  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem (1675 bytes)
	I1217 20:57:19.120667  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem (1708 bytes)
	I1217 20:57:19.120705  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:57:19.120722  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem -> /usr/share/ca-certificates/488412.pem
	I1217 20:57:19.120734  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem -> /usr/share/ca-certificates/4884122.pem
	I1217 20:57:19.121259  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 20:57:19.145342  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 20:57:19.172402  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 20:57:19.199952  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1217 20:57:19.221869  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1217 20:57:19.249229  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 20:57:19.272832  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 20:57:19.291834  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 20:57:19.311373  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 20:57:19.330971  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem --> /usr/share/ca-certificates/488412.pem (1338 bytes)
	I1217 20:57:19.351692  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem --> /usr/share/ca-certificates/4884122.pem (1708 bytes)
	I1217 20:57:19.371686  568189 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 20:57:19.386168  568189 ssh_runner.go:195] Run: openssl version
	I1217 20:57:19.392617  568189 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4884122.pem
	I1217 20:57:19.400115  568189 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4884122.pem /etc/ssl/certs/4884122.pem
	I1217 20:57:19.407811  568189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4884122.pem
	I1217 20:57:19.411925  568189 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 20:21 /usr/share/ca-certificates/4884122.pem
	I1217 20:57:19.411990  568189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4884122.pem
	I1217 20:57:19.458050  568189 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 20:57:19.465417  568189 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:57:19.472749  568189 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 20:57:19.480441  568189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:57:19.484121  568189 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:57:19.484184  568189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:57:19.525126  568189 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 20:57:19.532547  568189 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/488412.pem
	I1217 20:57:19.539760  568189 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/488412.pem /etc/ssl/certs/488412.pem
	I1217 20:57:19.547227  568189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/488412.pem
	I1217 20:57:19.551344  568189 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 20:21 /usr/share/ca-certificates/488412.pem
	I1217 20:57:19.551429  568189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/488412.pem
	I1217 20:57:19.592800  568189 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
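
The three checks just above repeat a fixed pattern per CA bundle: confirm the PEM file is non-empty, symlink it under /etc/ssl/certs, compute its OpenSSL subject hash, and verify that the c_rehash-style link `<hash>.0` resolves. A minimal Go sketch of the hash-link verification, with a hypothetical path (not minikube's actual code):

```go
// hashlink.go: a minimal sketch of the CA-install pattern seen above,
// not minikube's implementation. The cert path is hypothetical.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// ensureHashLink mirrors the log sequence: compute the OpenSSL subject
// hash of certPath and verify that /etc/ssl/certs/<hash>.0 is a symlink.
func ensureHashLink(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	fi, err := os.Lstat(link)
	if err != nil {
		return fmt.Errorf("expected hash link %s: %w", link, err)
	}
	if fi.Mode()&os.ModeSymlink == 0 {
		return fmt.Errorf("%s exists but is not a symlink", link)
	}
	return nil
}

func main() {
	if err := ensureHashLink("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```
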
	I1217 20:57:19.600200  568189 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 20:57:19.604024  568189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 20:57:19.651875  568189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 20:57:19.709469  568189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 20:57:19.756552  568189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 20:57:19.821907  568189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 20:57:19.909301  568189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
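
Each `-checkend 86400` invocation above asks whether a certificate expires within the next 24 hours (86400 seconds). The same check can be expressed natively with crypto/x509; the sketch below is illustrative, using one cert path from the log:

```go
// checkend.go: sketch of what `openssl x509 -checkend 86400` verifies,
// assuming a PEM-encoded certificate file; not minikube's own code.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate in pemBytes expires
// within d (openssl -checkend exits non-zero in that case).
func expiresWithin(pemBytes []byte, d time.Duration) (bool, error) {
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		return false, fmt.Errorf("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	soon, err := expiresWithin(data, 24*time.Hour) // mirrors -checkend 86400
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}
```
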
	I1217 20:57:19.984018  568189 kubeadm.go:401] StartCluster: {Name:ha-148567 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:ha-148567 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:57:19.984266  568189 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 20:57:19.984364  568189 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 20:57:20.031672  568189 cri.go:89] found id: "023b1c530d5224ef13b091e8f631aeb894024192e8f5534cf29c773714cf0197"
	I1217 20:57:20.031748  568189 cri.go:89] found id: "7b48eea7424a1e799bb5102aad672e4089e73d5c20382c2df99a7acabddf99d2"
	I1217 20:57:20.031769  568189 cri.go:89] found id: "055c04d40b9a0b3de2fc113e6e93106a29a67f711d7609c5bdc735d261688c9e"
	I1217 20:57:20.031790  568189 cri.go:89] found id: "4f2a8a504377b01cbe43d291e9fa7cd514647d2cf31a4b90042b71653d4272df"
	I1217 20:57:20.031827  568189 cri.go:89] found id: "0273f065d6acfc2f5b1353496b1c10bb1409bb5cd6154db0859cb71f3d44d9a6"
	I1217 20:57:20.031852  568189 cri.go:89] found id: ""
	I1217 20:57:20.031944  568189 ssh_runner.go:195] Run: sudo runc list -f json
	W1217 20:57:20.059831  568189 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T20:57:20Z" level=error msg="open /run/runc: no such file or directory"
	I1217 20:57:20.059961  568189 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 20:57:20.073057  568189 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 20:57:20.073134  568189 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 20:57:20.073239  568189 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 20:57:20.082317  568189 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 20:57:20.082916  568189 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-148567" does not appear in /home/jenkins/minikube-integration/21808-485134/kubeconfig
	I1217 20:57:20.083118  568189 kubeconfig.go:62] /home/jenkins/minikube-integration/21808-485134/kubeconfig needs updating (will repair): [kubeconfig missing "ha-148567" cluster setting kubeconfig missing "ha-148567" context setting]
	I1217 20:57:20.083494  568189 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-485134/kubeconfig: {Name:mkc7214551fa855d6d4575d627c3ecd54291b351 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
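
The kubeconfig repair reported just above (missing "ha-148567" cluster and context entries) amounts to loading the file, inserting the entries, and writing it back under the file lock. A rough client-go sketch, with a hypothetical CA path and auth-info name (not minikube's exact logic):

```go
// kubeconfig_repair.go: sketch of the verify-and-repair step logged above,
// using client-go's clientcmd package. Values come from the log where
// available; the CA path is hypothetical.
package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

func main() {
	path := os.Getenv("KUBECONFIG")
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	const name = "ha-148567"
	if _, ok := cfg.Clusters[name]; !ok {
		// Missing cluster entry: add cluster + context, as the log reports.
		cluster := clientcmdapi.NewCluster()
		cluster.Server = "https://192.168.49.2:8443"
		cluster.CertificateAuthority = "/path/to/ca.crt" // hypothetical
		cfg.Clusters[name] = cluster

		ctx := clientcmdapi.NewContext()
		ctx.Cluster = name
		ctx.AuthInfo = name // assumes a matching user entry exists
		cfg.Contexts[name] = ctx

		if err := clientcmd.WriteToFile(*cfg, path); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
}
```
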
	I1217 20:57:20.084509  568189 kapi.go:59] client config for ha-148567: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/client.crt", KeyFile:"/home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/client.key", CAFile:"/home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb51f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 20:57:20.085228  568189 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1217 20:57:20.085297  568189 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1217 20:57:20.085375  568189 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1217 20:57:20.085406  568189 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1217 20:57:20.085443  568189 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1217 20:57:20.085470  568189 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1217 20:57:20.085864  568189 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 20:57:20.094780  568189 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1217 20:57:20.094863  568189 kubeadm.go:602] duration metric: took 21.689252ms to restartPrimaryControlPlane
	I1217 20:57:20.094889  568189 kubeadm.go:403] duration metric: took 110.88ms to StartCluster
	I1217 20:57:20.094935  568189 settings.go:142] acquiring lock: {Name:mk91fac3a58d91b836cd0701183ef7cc1e571672 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:57:20.095035  568189 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21808-485134/kubeconfig
	I1217 20:57:20.095784  568189 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-485134/kubeconfig: {Name:mkc7214551fa855d6d4575d627c3ecd54291b351 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:57:20.096075  568189 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 20:57:20.096138  568189 start.go:242] waiting for startup goroutines ...
	I1217 20:57:20.096184  568189 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 20:57:20.097159  568189 config.go:182] Loaded profile config "ha-148567": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:57:20.102228  568189 out.go:179] * Enabled addons: 
	I1217 20:57:20.105517  568189 addons.go:530] duration metric: took 9.330527ms for enable addons: enabled=[]
	I1217 20:57:20.105608  568189 start.go:247] waiting for cluster config update ...
	I1217 20:57:20.105634  568189 start.go:256] writing updated cluster config ...
	I1217 20:57:20.109046  568189 out.go:203] 
	I1217 20:57:20.112434  568189 config.go:182] Loaded profile config "ha-148567": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:57:20.112620  568189 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/config.json ...
	I1217 20:57:20.116088  568189 out.go:179] * Starting "ha-148567-m02" control-plane node in "ha-148567" cluster
	I1217 20:57:20.119188  568189 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 20:57:20.122470  568189 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 20:57:20.125477  568189 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 20:57:20.125543  568189 cache.go:65] Caching tarball of preloaded images
	I1217 20:57:20.125698  568189 preload.go:238] Found /home/jenkins/minikube-integration/21808-485134/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1217 20:57:20.125733  568189 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1217 20:57:20.125911  568189 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/config.json ...
	I1217 20:57:20.126192  568189 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 20:57:20.156127  568189 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 20:57:20.156146  568189 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 20:57:20.156158  568189 cache.go:243] Successfully downloaded all kic artifacts
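
The base-image check above short-circuits the pull when the kicbase image is already present in the local Docker daemon. One simple way to express that test is `docker image inspect`, which exits non-zero for unknown images; a hedged Go sketch (not minikube's actual implementation):

```go
// imagecheck.go: sketch of the "found ... in local docker daemon,
// skipping pull" decision seen above.
package main

import (
	"fmt"
	"os/exec"
)

// inDaemon reports whether ref is already present in the local daemon:
// `docker image inspect` exits 0 only if the image exists locally.
func inDaemon(ref string) bool {
	return exec.Command("docker", "image", "inspect", ref).Run() == nil
}

func main() {
	ref := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141"
	if inDaemon(ref) {
		fmt.Println("exists in daemon, skipping load")
	} else {
		fmt.Println("not found locally, would pull")
	}
}
```
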
	I1217 20:57:20.156180  568189 start.go:360] acquireMachinesLock for ha-148567-m02: {Name:mka0efc876c4e4103c7b51199829a59495ed53d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 20:57:20.156236  568189 start.go:364] duration metric: took 37.022µs to acquireMachinesLock for "ha-148567-m02"
	I1217 20:57:20.156255  568189 start.go:96] Skipping create...Using existing machine configuration
	I1217 20:57:20.156260  568189 fix.go:54] fixHost starting: m02
	I1217 20:57:20.156516  568189 cli_runner.go:164] Run: docker container inspect ha-148567-m02 --format={{.State.Status}}
	I1217 20:57:20.185826  568189 fix.go:112] recreateIfNeeded on ha-148567-m02: state=Stopped err=<nil>
	W1217 20:57:20.185852  568189 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 20:57:20.189334  568189 out.go:252] * Restarting existing docker container for "ha-148567-m02" ...
	I1217 20:57:20.189427  568189 cli_runner.go:164] Run: docker start ha-148567-m02
	I1217 20:57:20.580145  568189 cli_runner.go:164] Run: docker container inspect ha-148567-m02 --format={{.State.Status}}
	I1217 20:57:20.605573  568189 kic.go:430] container "ha-148567-m02" state is running.
	I1217 20:57:20.605996  568189 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-148567-m02
	I1217 20:57:20.637469  568189 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/config.json ...
	I1217 20:57:20.637709  568189 machine.go:94] provisionDockerMachine start ...
	I1217 20:57:20.637776  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567-m02
	I1217 20:57:20.666081  568189 main.go:143] libmachine: Using SSH client type: native
	I1217 20:57:20.666435  568189 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33213 <nil> <nil>}
	I1217 20:57:20.666445  568189 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 20:57:20.667044  568189 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55606->127.0.0.1:33213: read: connection reset by peer
	I1217 20:57:23.835171  568189 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-148567-m02
	
	I1217 20:57:23.835247  568189 ubuntu.go:182] provisioning hostname "ha-148567-m02"
	I1217 20:57:23.835352  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567-m02
	I1217 20:57:23.865477  568189 main.go:143] libmachine: Using SSH client type: native
	I1217 20:57:23.865786  568189 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33213 <nil> <nil>}
	I1217 20:57:23.865799  568189 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-148567-m02 && echo "ha-148567-m02" | sudo tee /etc/hostname
	I1217 20:57:24.081116  568189 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-148567-m02
	
	I1217 20:57:24.081197  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567-m02
	I1217 20:57:24.137190  568189 main.go:143] libmachine: Using SSH client type: native
	I1217 20:57:24.137506  568189 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33213 <nil> <nil>}
	I1217 20:57:24.137528  568189 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-148567-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-148567-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-148567-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 20:57:24.316986  568189 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 20:57:24.317016  568189 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21808-485134/.minikube CaCertPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21808-485134/.minikube}
	I1217 20:57:24.317033  568189 ubuntu.go:190] setting up certificates
	I1217 20:57:24.317049  568189 provision.go:84] configureAuth start
	I1217 20:57:24.317123  568189 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-148567-m02
	I1217 20:57:24.367712  568189 provision.go:143] copyHostCerts
	I1217 20:57:24.367760  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem
	I1217 20:57:24.367793  568189 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem, removing ...
	I1217 20:57:24.367807  568189 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem
	I1217 20:57:24.367891  568189 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem (1123 bytes)
	I1217 20:57:24.367990  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem
	I1217 20:57:24.368036  568189 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem, removing ...
	I1217 20:57:24.368044  568189 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem
	I1217 20:57:24.368085  568189 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem (1675 bytes)
	I1217 20:57:24.368162  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem
	I1217 20:57:24.368206  568189 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem, removing ...
	I1217 20:57:24.368214  568189 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem
	I1217 20:57:24.368237  568189 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem (1082 bytes)
	I1217 20:57:24.368289  568189 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem org=jenkins.ha-148567-m02 san=[127.0.0.1 192.168.49.3 ha-148567-m02 localhost minikube]
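
The server certificate generated above is signed by the machine CA and carries the SANs listed in `san=[...]`, so clients can reach the node by IP or by any of its names. A self-contained crypto/x509 sketch of that shape (in-memory CA, SAN values copied from the log, error handling elided; not minikube's provision code):

```go
// servercert.go: compact sketch of generating a CA-signed server cert
// with the SAN list from the log. Illustrative only.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Hypothetical CA generated in-memory; minikube loads ca.pem/ca-key.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the SANs shown in the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-148567-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.3")},
		DNSNames:     []string{"ha-148567-m02", "localhost", "minikube"},
	}
	der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```
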
	I1217 20:57:24.734586  568189 provision.go:177] copyRemoteCerts
	I1217 20:57:24.734657  568189 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 20:57:24.734700  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567-m02
	I1217 20:57:24.752816  568189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/ha-148567-m02/id_rsa Username:docker}
	I1217 20:57:24.861032  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1217 20:57:24.861096  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 20:57:24.885807  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1217 20:57:24.885871  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1217 20:57:24.909744  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1217 20:57:24.909802  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 20:57:24.940905  568189 provision.go:87] duration metric: took 623.841925ms to configureAuth
	I1217 20:57:24.940983  568189 ubuntu.go:206] setting minikube options for container-runtime
	I1217 20:57:24.941278  568189 config.go:182] Loaded profile config "ha-148567": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:57:24.941438  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567-m02
	I1217 20:57:24.973318  568189 main.go:143] libmachine: Using SSH client type: native
	I1217 20:57:24.973626  568189 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33213 <nil> <nil>}
	I1217 20:57:24.973640  568189 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 20:57:25.394552  568189 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 20:57:25.394616  568189 machine.go:97] duration metric: took 4.756897721s to provisionDockerMachine
	I1217 20:57:25.394644  568189 start.go:293] postStartSetup for "ha-148567-m02" (driver="docker")
	I1217 20:57:25.394675  568189 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 20:57:25.394774  568189 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 20:57:25.394857  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567-m02
	I1217 20:57:25.413005  568189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/ha-148567-m02/id_rsa Username:docker}
	I1217 20:57:25.507933  568189 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 20:57:25.511214  568189 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 20:57:25.511242  568189 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 20:57:25.511254  568189 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-485134/.minikube/addons for local assets ...
	I1217 20:57:25.511331  568189 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-485134/.minikube/files for local assets ...
	I1217 20:57:25.511429  568189 filesync.go:149] local asset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem -> 4884122.pem in /etc/ssl/certs
	I1217 20:57:25.511454  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem -> /etc/ssl/certs/4884122.pem
	I1217 20:57:25.511595  568189 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 20:57:25.519225  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem --> /etc/ssl/certs/4884122.pem (1708 bytes)
	I1217 20:57:25.536498  568189 start.go:296] duration metric: took 141.821713ms for postStartSetup
	I1217 20:57:25.536594  568189 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 20:57:25.536641  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567-m02
	I1217 20:57:25.554701  568189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/ha-148567-m02/id_rsa Username:docker}
	I1217 20:57:25.648875  568189 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 20:57:25.653908  568189 fix.go:56] duration metric: took 5.497641165s for fixHost
	I1217 20:57:25.653937  568189 start.go:83] releasing machines lock for "ha-148567-m02", held for 5.497692546s
	I1217 20:57:25.654030  568189 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-148567-m02
	I1217 20:57:25.674474  568189 out.go:179] * Found network options:
	I1217 20:57:25.677239  568189 out.go:179]   - NO_PROXY=192.168.49.2
	W1217 20:57:25.680103  568189 proxy.go:120] fail to check proxy env: Error ip not in block
	I1217 20:57:25.680211  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem (1338 bytes)
	W1217 20:57:25.680260  568189 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412_empty.pem, impossibly tiny 0 bytes
	I1217 20:57:25.680273  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 20:57:25.680302  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem (1082 bytes)
	I1217 20:57:25.680332  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem (1123 bytes)
	I1217 20:57:25.680360  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem (1675 bytes)
	I1217 20:57:25.680422  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem (1708 bytes)
	I1217 20:57:25.680464  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:57:25.680483  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem -> /usr/share/ca-certificates/488412.pem
	I1217 20:57:25.680501  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem -> /usr/share/ca-certificates/4884122.pem
	I1217 20:57:25.680526  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 20:57:25.680594  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567-m02
	I1217 20:57:25.699072  568189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/ha-148567-m02/id_rsa Username:docker}
	I1217 20:57:25.806704  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem --> /usr/share/ca-certificates/488412.pem (1338 bytes)
	I1217 20:57:25.825127  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem --> /usr/share/ca-certificates/4884122.pem (1708 bytes)
	I1217 20:57:25.843274  568189 ssh_runner.go:195] Run: openssl version
	I1217 20:57:25.850408  568189 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/488412.pem
	I1217 20:57:25.858349  568189 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/488412.pem /etc/ssl/certs/488412.pem
	I1217 20:57:25.866386  568189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/488412.pem
	I1217 20:57:25.870671  568189 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 20:21 /usr/share/ca-certificates/488412.pem
	I1217 20:57:25.870754  568189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/488412.pem
	I1217 20:57:25.912800  568189 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 20:57:25.920578  568189 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4884122.pem
	I1217 20:57:25.928156  568189 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4884122.pem /etc/ssl/certs/4884122.pem
	I1217 20:57:25.935802  568189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4884122.pem
	I1217 20:57:25.939813  568189 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 20:21 /usr/share/ca-certificates/4884122.pem
	I1217 20:57:25.939893  568189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4884122.pem
	I1217 20:57:25.984495  568189 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 20:57:25.993961  568189 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:57:26.008927  568189 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 20:57:26.019188  568189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:57:26.024558  568189 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:57:26.024680  568189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:57:26.082015  568189 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 20:57:26.099109  568189 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-certificates >/dev/null 2>&1 && sudo update-ca-certificates || true"
	I1217 20:57:26.105246  568189 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-trust >/dev/null 2>&1 && sudo update-ca-trust extract || true"
	W1217 20:57:26.113304  568189 proxy.go:120] fail to check proxy env: Error ip not in block
	I1217 20:57:26.113412  568189 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 20:57:26.113483  568189 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 20:57:26.349285  568189 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 20:57:26.356495  568189 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 20:57:26.356569  568189 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 20:57:26.369266  568189 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 20:57:26.369291  568189 start.go:496] detecting cgroup driver to use...
	I1217 20:57:26.369323  568189 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 20:57:26.369374  568189 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 20:57:26.391970  568189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 20:57:26.408218  568189 docker.go:218] disabling cri-docker service (if available) ...
	I1217 20:57:26.408282  568189 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 20:57:26.433162  568189 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 20:57:26.464579  568189 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 20:57:26.722421  568189 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 20:57:27.055410  568189 docker.go:234] disabling docker service ...
	I1217 20:57:27.055512  568189 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 20:57:27.105418  568189 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 20:57:27.136492  568189 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 20:57:27.498616  568189 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 20:57:27.849231  568189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 20:57:27.879943  568189 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 20:57:27.940040  568189 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 20:57:27.940159  568189 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:57:27.970284  568189 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1217 20:57:27.970406  568189 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:57:27.993313  568189 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:57:28.003134  568189 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:57:28.018148  568189 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 20:57:28.038773  568189 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:57:28.082030  568189 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:57:28.095803  568189 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:57:28.112015  568189 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 20:57:28.129347  568189 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 20:57:28.139870  568189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:57:28.466945  568189 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 20:58:58.759793  568189 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.292810635s)
	I1217 20:58:58.759820  568189 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 20:58:58.759888  568189 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
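
After restarting CRI-O, the runner waits up to 60s for the runtime socket to appear before probing crictl. A minimal sketch of that wait loop, assuming simple half-second polling (the hypothetical helper below is not minikube's code):

```go
// waitsock.go: sketch of the "Will wait 60s for socket path" step above,
// polling until /var/run/crio/crio.sock exists.
package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPath stats path repeatedly until it exists or the deadline passes.
func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("crio socket is up")
}
```
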
	I1217 20:58:58.764083  568189 start.go:564] Will wait 60s for crictl version
	I1217 20:58:58.764156  568189 ssh_runner.go:195] Run: which crictl
	I1217 20:58:58.767972  568189 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 20:58:58.795899  568189 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 20:58:58.796007  568189 ssh_runner.go:195] Run: crio --version
	I1217 20:58:58.827201  568189 ssh_runner.go:195] Run: crio --version
	I1217 20:58:58.863057  568189 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	I1217 20:58:58.865958  568189 out.go:179]   - env NO_PROXY=192.168.49.2
	I1217 20:58:58.868926  568189 cli_runner.go:164] Run: docker network inspect ha-148567 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 20:58:58.886910  568189 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1217 20:58:58.891980  568189 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
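
The one-liner above updates /etc/hosts idempotently: strip any stale `host.minikube.internal` entry, append the fresh mapping, and copy the result back with sudo. The same transformation in Go (writing to a scratch file rather than /etc/hosts; a sketch, not minikube's code):

```go
// hosts_update.go: sketch of the grep -v / echo / cp pattern above.
package main

import (
	"os"
	"strings"
)

// upsertHost drops any line ending in "\t<name>" (like `grep -v`) and
// appends the fresh "ip\tname" mapping, preserving a trailing newline.
func upsertHost(hosts, ip, name string) string {
	var out []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		out = append(out, line)
	}
	out = append(out, ip+"\t"+name)
	return strings.Join(out, "\n") + "\n"
}

func main() {
	data, _ := os.ReadFile("/etc/hosts")
	updated := upsertHost(string(data), "192.168.49.1", "host.minikube.internal")
	_ = os.WriteFile("/tmp/hosts.updated", []byte(updated), 0644) // sketch: not /etc/hosts
}
```
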
	I1217 20:58:58.903686  568189 mustload.go:66] Loading cluster: ha-148567
	I1217 20:58:58.904009  568189 config.go:182] Loaded profile config "ha-148567": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:58:58.904332  568189 cli_runner.go:164] Run: docker container inspect ha-148567 --format={{.State.Status}}
	I1217 20:58:58.922016  568189 host.go:66] Checking if "ha-148567" exists ...
	I1217 20:58:58.922335  568189 certs.go:69] Setting up /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567 for IP: 192.168.49.3
	I1217 20:58:58.922347  568189 certs.go:195] generating shared ca certs ...
	I1217 20:58:58.922361  568189 certs.go:227] acquiring lock for ca certs: {Name:mk2b1bd9fa0b029b02dd38a7b1b08717d4426eca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:58:58.922470  568189 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.key
	I1217 20:58:58.922522  568189 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.key
	I1217 20:58:58.922529  568189 certs.go:257] generating profile certs ...
	I1217 20:58:58.922618  568189 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/client.key
	I1217 20:58:58.922687  568189 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/apiserver.key.1961a769
	I1217 20:58:58.922732  568189 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/proxy-client.key
	I1217 20:58:58.922741  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1217 20:58:58.922754  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1217 20:58:58.922765  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1217 20:58:58.922777  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1217 20:58:58.922787  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1217 20:58:58.922803  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1217 20:58:58.922815  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1217 20:58:58.922825  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1217 20:58:58.922873  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem (1338 bytes)
	W1217 20:58:58.922904  568189 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412_empty.pem, impossibly tiny 0 bytes
	I1217 20:58:58.922923  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 20:58:58.922955  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem (1082 bytes)
	I1217 20:58:58.922983  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem (1123 bytes)
	I1217 20:58:58.923010  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem (1675 bytes)
	I1217 20:58:58.923089  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem (1708 bytes)
	I1217 20:58:58.923123  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem -> /usr/share/ca-certificates/4884122.pem
	I1217 20:58:58.923147  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:58:58.923161  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem -> /usr/share/ca-certificates/488412.pem
	I1217 20:58:58.923214  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567
	I1217 20:58:58.940978  568189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33208 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/ha-148567/id_rsa Username:docker}
	I1217 20:58:59.031917  568189 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1217 20:58:59.036151  568189 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1217 20:58:59.044650  568189 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1217 20:58:59.048524  568189 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1217 20:58:59.056890  568189 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1217 20:58:59.061264  568189 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1217 20:58:59.070225  568189 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1217 20:58:59.074080  568189 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1217 20:58:59.082761  568189 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1217 20:58:59.086318  568189 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1217 20:58:59.094905  568189 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1217 20:58:59.098892  568189 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1217 20:58:59.107797  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 20:58:59.130640  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 20:58:59.150337  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 20:58:59.170619  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1217 20:58:59.190148  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1217 20:58:59.207919  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 20:58:59.226715  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 20:58:59.255397  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 20:58:59.275249  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem --> /usr/share/ca-certificates/4884122.pem (1708 bytes)
	I1217 20:58:59.296360  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 20:58:59.315496  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem --> /usr/share/ca-certificates/488412.pem (1338 bytes)
	I1217 20:58:59.335711  568189 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1217 20:58:59.351659  568189 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1217 20:58:59.365425  568189 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1217 20:58:59.379095  568189 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1217 20:58:59.403513  568189 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1217 20:58:59.417385  568189 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1217 20:58:59.430972  568189 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1217 20:58:59.445861  568189 ssh_runner.go:195] Run: openssl version
	I1217 20:58:59.452092  568189 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4884122.pem
	I1217 20:58:59.460052  568189 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4884122.pem /etc/ssl/certs/4884122.pem
	I1217 20:58:59.467896  568189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4884122.pem
	I1217 20:58:59.471905  568189 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 20:21 /usr/share/ca-certificates/4884122.pem
	I1217 20:58:59.472027  568189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4884122.pem
	I1217 20:58:59.513981  568189 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 20:58:59.521659  568189 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:58:59.529706  568189 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 20:58:59.537199  568189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:58:59.541310  568189 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:58:59.541399  568189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:58:59.585446  568189 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 20:58:59.592862  568189 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/488412.pem
	I1217 20:58:59.600234  568189 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/488412.pem /etc/ssl/certs/488412.pem
	I1217 20:58:59.608581  568189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/488412.pem
	I1217 20:58:59.612452  568189 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 20:21 /usr/share/ca-certificates/488412.pem
	I1217 20:58:59.612541  568189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/488412.pem
	I1217 20:58:59.653344  568189 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 20:58:59.661141  568189 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 20:58:59.665238  568189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 20:58:59.706455  568189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 20:58:59.747808  568189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 20:58:59.789584  568189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 20:58:59.830635  568189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 20:58:59.871901  568189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1217 20:58:59.913067  568189 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.3 crio true true} ...
	I1217 20:58:59.913211  568189 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-148567-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:ha-148567 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 20:58:59.913253  568189 kube-vip.go:115] generating kube-vip config ...
	I1217 20:58:59.913314  568189 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1217 20:58:59.926579  568189 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1217 20:58:59.926690  568189 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
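
Worth noting in the generated manifest: because the `lsmod | grep ip_vs` probe failed, control-plane load-balancing was skipped and kube-vip runs in plain ARP/leader-election mode against the VIP 192.168.49.254. Static-pod manifests like this are typically rendered from a template with the VIP, interface, and port filled in; the sketch below is a hypothetical, heavily trimmed generator, not minikube's kube-vip.go:

```go
// kubevip_tmpl.go: illustrative rendering of a kube-vip static-pod
// manifest via text/template. Field set is deliberately minimal.
package main

import (
	"os"
	"text/template"
)

const manifest = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: ghcr.io/kube-vip/kube-vip:v1.0.2
    args: ["manager"]
    env:
    - {name: port, value: "{{.Port}}"}
    - {name: address, value: "{{.VIP}}"}
    - {name: vip_interface, value: "{{.Interface}}"}
  hostNetwork: true
`

type params struct {
	VIP, Interface string
	Port           int
}

func main() {
	t := template.Must(template.New("kube-vip").Parse(manifest))
	// Values taken from the generated config in the log above.
	t.Execute(os.Stdout, params{VIP: "192.168.49.254", Interface: "eth0", Port: 8443})
}
```
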
	I1217 20:58:59.926836  568189 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1217 20:58:59.934802  568189 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 20:58:59.934923  568189 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1217 20:58:59.942778  568189 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1217 20:58:59.955655  568189 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 20:58:59.968160  568189 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1217 20:58:59.982401  568189 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1217 20:58:59.986001  568189 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 20:58:59.995859  568189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:59:00.404474  568189 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 20:59:00.421506  568189 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 20:59:00.421874  568189 config.go:182] Loaded profile config "ha-148567": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:59:00.427429  568189 out.go:179] * Verifying Kubernetes components...
	I1217 20:59:00.430438  568189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:59:00.576754  568189 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 20:59:00.591993  568189 kapi.go:59] client config for ha-148567: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/client.crt", KeyFile:"/home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/client.key", CAFile:"/home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb51f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1217 20:59:00.592071  568189 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1217 20:59:00.592328  568189 node_ready.go:35] waiting up to 6m0s for node "ha-148567-m02" to be "Ready" ...
	I1217 20:59:07.706331  568189 node_ready.go:49] node "ha-148567-m02" is "Ready"
	I1217 20:59:07.706358  568189 node_ready.go:38] duration metric: took 7.114006977s for node "ha-148567-m02" to be "Ready" ...
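
The "Ready" wait above is a poll on the node object's NodeReady condition. With the client-go types, the per-iteration check reduces to roughly the following (a sketch of the idea, not minikube's node_ready.go):

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    // isNodeReady reports whether the node carries a NodeReady condition with
    // status True, which is what the node_ready wait in the log is polling for.
    func isNodeReady(node *corev1.Node) bool {
    	for _, c := range node.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	node := &corev1.Node{Status: corev1.NodeStatus{
    		Conditions: []corev1.NodeCondition{
    			{Type: corev1.NodeReady, Status: corev1.ConditionTrue},
    		},
    	}}
    	fmt.Println(isNodeReady(node)) // true
    }
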
	I1217 20:59:07.706371  568189 api_server.go:52] waiting for apiserver process to appear ...
	I1217 20:59:07.706429  568189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:59:07.728020  568189 api_server.go:72] duration metric: took 7.306463101s to wait for apiserver process to appear ...
	I1217 20:59:07.728044  568189 api_server.go:88] waiting for apiserver healthz status ...
	I1217 20:59:07.728063  568189 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1217 20:59:07.763283  568189 api_server.go:279] https://192.168.49.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1217 20:59:07.763309  568189 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1217 20:59:08.228746  568189 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1217 20:59:08.252676  568189 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 20:59:08.252767  568189 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	I1217 20:59:08.728188  568189 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1217 20:59:08.754073  568189 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 20:59:08.754096  568189 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	I1217 20:59:09.228723  568189 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1217 20:59:09.239736  568189 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 20:59:09.239818  568189 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	I1217 20:59:09.728191  568189 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1217 20:59:09.749211  568189 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 20:59:09.749236  568189 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	I1217 20:59:10.228802  568189 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1217 20:59:10.249826  568189 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 20:59:10.249920  568189 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	I1217 20:59:10.728177  568189 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1217 20:59:10.738376  568189 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 20:59:10.738457  568189 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	I1217 20:59:11.228738  568189 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1217 20:59:11.237435  568189 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 20:59:11.237473  568189 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	I1217 20:59:11.728920  568189 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1217 20:59:11.737210  568189 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 20:59:11.737234  568189 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	I1217 20:59:12.228685  568189 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1217 20:59:12.257584  568189 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 20:59:12.257614  568189 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	I1217 20:59:12.728966  568189 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1217 20:59:12.741760  568189 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 20:59:12.741792  568189 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	I1217 20:59:13.228213  568189 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1217 20:59:13.237780  568189 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 20:59:13.237819  568189 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	I1217 20:59:13.728124  568189 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1217 20:59:13.736267  568189 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 20:59:13.736302  568189 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	I1217 20:59:14.228758  568189 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1217 20:59:14.248460  568189 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 20:59:14.248488  568189 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	I1217 20:59:14.728720  568189 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1217 20:59:14.746850  568189 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 20:59:14.746929  568189 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	I1217 20:59:15.228174  568189 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1217 20:59:15.243044  568189 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	I1217 20:59:15.728743  568189 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1217 20:59:15.737606  568189 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	I1217 20:59:16.228734  568189 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1217 20:59:16.237829  568189 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	I1217 20:59:16.728194  568189 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1217 20:59:16.736702  568189 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[identical healthz body elided for each 500 above: every check passed except [-]poststarthook/start-service-ip-repair-controllers failed: reason withheld]
	I1217 20:59:17.228177  568189 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1217 20:59:17.237306  568189 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1217 20:59:17.238535  568189 api_server.go:141] control plane version: v1.34.3
	I1217 20:59:17.238566  568189 api_server.go:131] duration metric: took 9.510515092s to wait for apiserver health ...
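
The lines above show minikube polling the apiserver's /healthz roughly every 500ms until it finally returns 200 after ~9.5s. A minimal Go sketch of such a poll loop (hypothetical helper, not minikube's actual api_server.go code; a real client would trust the cluster CA rather than skipping TLS verification):

    package apiwait

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitForHealthz polls url until it answers 200 OK or the timeout expires.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            // The apiserver serves a cert signed by the cluster CA, so plain
            // verification fails; skipping it keeps the sketch short.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   2 * time.Second,
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // the "returned 200: ok" case above
                }
            }
            time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
        }
        return fmt.Errorf("apiserver never became healthy at %s", url)
    }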
	I1217 20:59:17.238576  568189 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 20:59:17.244973  568189 system_pods.go:59] 26 kube-system pods found
	I1217 20:59:17.245011  568189 system_pods.go:61] "coredns-66bc5c9577-l8xqv" [e3a7d90f-a4fd-4393-a20a-d49f8a8aa0db] Running
	I1217 20:59:17.245018  568189 system_pods.go:61] "coredns-66bc5c9577-wgcmx" [4fbbda83-a7d2-41c0-98ea-066d493cd483] Running
	I1217 20:59:17.245023  568189 system_pods.go:61] "etcd-ha-148567" [d54c6b05-3d02-4af3-a488-ffe69d55c8c3] Running
	I1217 20:59:17.245027  568189 system_pods.go:61] "etcd-ha-148567-m02" [574da258-f8b2-4230-93a1-028b2c5765bd] Running
	I1217 20:59:17.245031  568189 system_pods.go:61] "etcd-ha-148567-m03" [c1a60f08-76af-48b5-afe6-9389bc0fca8b] Running
	I1217 20:59:17.245034  568189 system_pods.go:61] "kindnet-4xxcs" [3950f031-7524-40a2-aa03-f50e754478ed] Running
	I1217 20:59:17.245038  568189 system_pods.go:61] "kindnet-88zsz" [22238c23-358a-49e1-82d6-ac88a03c654f] Running
	I1217 20:59:17.245042  568189 system_pods.go:61] "kindnet-gwspj" [f5b895d0-8eba-4923-9537-bd07bd57d3b5] Running
	I1217 20:59:17.245046  568189 system_pods.go:61] "kindnet-pv94f" [6135ada6-3e1f-4b1b-a2e2-014a1eb62772] Running
	I1217 20:59:17.245054  568189 system_pods.go:61] "kube-apiserver-ha-148567" [3b830c1a-94be-41e9-bc3a-725b97833055] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 20:59:17.245060  568189 system_pods.go:61] "kube-apiserver-ha-148567-m02" [0f0e1b1c-b98e-46cd-a683-ce654b6fdeb1] Running
	I1217 20:59:17.245070  568189 system_pods.go:61] "kube-apiserver-ha-148567-m03" [1be33e29-7193-43bb-aaf4-e48cbf50abf9] Running
	I1217 20:59:17.245078  568189 system_pods.go:61] "kube-controller-manager-ha-148567" [8baf857a-5cbe-4de6-9aba-7a244ab2fb08] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 20:59:17.245086  568189 system_pods.go:61] "kube-controller-manager-ha-148567-m02" [da8d38ea-86a0-4b97-9512-0541ac0d2d44] Running
	I1217 20:59:17.245090  568189 system_pods.go:61] "kube-controller-manager-ha-148567-m03" [360b1069-90d9-45be-a0c3-3c81c88154e0] Running
	I1217 20:59:17.245094  568189 system_pods.go:61] "kube-proxy-8nmpd" [52f8f6d6-8de7-4758-b35e-843a2a6ed562] Running
	I1217 20:59:17.245097  568189 system_pods.go:61] "kube-proxy-9n5cb" [775371d6-bc7d-40f6-8e0f-655f265828ba] Running
	I1217 20:59:17.245101  568189 system_pods.go:61] "kube-proxy-9rv8b" [7e160782-bab5-49b5-950c-377bde3bec7f] Running
	I1217 20:59:17.245109  568189 system_pods.go:61] "kube-proxy-cbk47" [149fd1d8-7762-49fd-81da-23047452dc4a] Running
	I1217 20:59:17.245113  568189 system_pods.go:61] "kube-scheduler-ha-148567" [a04a9bb2-7f73-4940-9952-049c4b406086] Running
	I1217 20:59:17.245124  568189 system_pods.go:61] "kube-scheduler-ha-148567-m02" [534b3ee9-d48c-450d-b20d-308e0a13a720] Running
	I1217 20:59:17.245128  568189 system_pods.go:61] "kube-scheduler-ha-148567-m03" [08f0c7d8-2adc-444e-a8db-5ad41340d101] Running
	I1217 20:59:17.245132  568189 system_pods.go:61] "kube-vip-ha-148567" [05c573d7-3cc9-4d90-8a3e-154ba3c7423a] Running
	I1217 20:59:17.245136  568189 system_pods.go:61] "kube-vip-ha-148567-m02" [4c4de02d-d57f-4a66-983e-12dbdb0f3521] Running
	I1217 20:59:17.245140  568189 system_pods.go:61] "kube-vip-ha-148567-m03" [37a516e3-0f3b-413f-be72-a445c095667f] Running
	I1217 20:59:17.245144  568189 system_pods.go:61] "storage-provisioner" [b613dd7f-cf83-4a6a-a6b8-c7b6282790ab] Running
	I1217 20:59:17.245153  568189 system_pods.go:74] duration metric: took 6.571369ms to wait for pod list to return data ...
	I1217 20:59:17.245166  568189 default_sa.go:34] waiting for default service account to be created ...
	I1217 20:59:17.248376  568189 default_sa.go:45] found service account: "default"
	I1217 20:59:17.248403  568189 default_sa.go:55] duration metric: took 3.23112ms for default service account to be created ...
	I1217 20:59:17.248414  568189 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 20:59:17.254388  568189 system_pods.go:86] 26 kube-system pods found
	I1217 20:59:17.254429  568189 system_pods.go:89] "coredns-66bc5c9577-l8xqv" [e3a7d90f-a4fd-4393-a20a-d49f8a8aa0db] Running
	I1217 20:59:17.254436  568189 system_pods.go:89] "coredns-66bc5c9577-wgcmx" [4fbbda83-a7d2-41c0-98ea-066d493cd483] Running
	I1217 20:59:17.254441  568189 system_pods.go:89] "etcd-ha-148567" [d54c6b05-3d02-4af3-a488-ffe69d55c8c3] Running
	I1217 20:59:17.254445  568189 system_pods.go:89] "etcd-ha-148567-m02" [574da258-f8b2-4230-93a1-028b2c5765bd] Running
	I1217 20:59:17.254450  568189 system_pods.go:89] "etcd-ha-148567-m03" [c1a60f08-76af-48b5-afe6-9389bc0fca8b] Running
	I1217 20:59:17.254454  568189 system_pods.go:89] "kindnet-4xxcs" [3950f031-7524-40a2-aa03-f50e754478ed] Running
	I1217 20:59:17.254458  568189 system_pods.go:89] "kindnet-88zsz" [22238c23-358a-49e1-82d6-ac88a03c654f] Running
	I1217 20:59:17.254464  568189 system_pods.go:89] "kindnet-gwspj" [f5b895d0-8eba-4923-9537-bd07bd57d3b5] Running
	I1217 20:59:17.254471  568189 system_pods.go:89] "kindnet-pv94f" [6135ada6-3e1f-4b1b-a2e2-014a1eb62772] Running
	I1217 20:59:17.254478  568189 system_pods.go:89] "kube-apiserver-ha-148567" [3b830c1a-94be-41e9-bc3a-725b97833055] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 20:59:17.254487  568189 system_pods.go:89] "kube-apiserver-ha-148567-m02" [0f0e1b1c-b98e-46cd-a683-ce654b6fdeb1] Running
	I1217 20:59:17.254493  568189 system_pods.go:89] "kube-apiserver-ha-148567-m03" [1be33e29-7193-43bb-aaf4-e48cbf50abf9] Running
	I1217 20:59:17.254506  568189 system_pods.go:89] "kube-controller-manager-ha-148567" [8baf857a-5cbe-4de6-9aba-7a244ab2fb08] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 20:59:17.254511  568189 system_pods.go:89] "kube-controller-manager-ha-148567-m02" [da8d38ea-86a0-4b97-9512-0541ac0d2d44] Running
	I1217 20:59:17.254523  568189 system_pods.go:89] "kube-controller-manager-ha-148567-m03" [360b1069-90d9-45be-a0c3-3c81c88154e0] Running
	I1217 20:59:17.254527  568189 system_pods.go:89] "kube-proxy-8nmpd" [52f8f6d6-8de7-4758-b35e-843a2a6ed562] Running
	I1217 20:59:17.254531  568189 system_pods.go:89] "kube-proxy-9n5cb" [775371d6-bc7d-40f6-8e0f-655f265828ba] Running
	I1217 20:59:17.254535  568189 system_pods.go:89] "kube-proxy-9rv8b" [7e160782-bab5-49b5-950c-377bde3bec7f] Running
	I1217 20:59:17.254539  568189 system_pods.go:89] "kube-proxy-cbk47" [149fd1d8-7762-49fd-81da-23047452dc4a] Running
	I1217 20:59:17.254544  568189 system_pods.go:89] "kube-scheduler-ha-148567" [a04a9bb2-7f73-4940-9952-049c4b406086] Running
	I1217 20:59:17.254548  568189 system_pods.go:89] "kube-scheduler-ha-148567-m02" [534b3ee9-d48c-450d-b20d-308e0a13a720] Running
	I1217 20:59:17.254554  568189 system_pods.go:89] "kube-scheduler-ha-148567-m03" [08f0c7d8-2adc-444e-a8db-5ad41340d101] Running
	I1217 20:59:17.254558  568189 system_pods.go:89] "kube-vip-ha-148567" [05c573d7-3cc9-4d90-8a3e-154ba3c7423a] Running
	I1217 20:59:17.254564  568189 system_pods.go:89] "kube-vip-ha-148567-m02" [4c4de02d-d57f-4a66-983e-12dbdb0f3521] Running
	I1217 20:59:17.254568  568189 system_pods.go:89] "kube-vip-ha-148567-m03" [37a516e3-0f3b-413f-be72-a445c095667f] Running
	I1217 20:59:17.254574  568189 system_pods.go:89] "storage-provisioner" [b613dd7f-cf83-4a6a-a6b8-c7b6282790ab] Running
	I1217 20:59:17.254581  568189 system_pods.go:126] duration metric: took 6.162224ms to wait for k8s-apps to be running ...
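
The two pod sweeps above (system_pods.go:61 and :89) amount to a client-go list of the kube-system namespace gated on pod phase, which is why pods reported as "Running / ContainersNotReady" still pass. A rough sketch under that assumption (illustrative names, not minikube's code):

    package podwait

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // allSystemPodsRunning returns nil once every kube-system pod reports
    // phase Running; per-container readiness is deliberately not gated on,
    // mirroring how kube-apiserver-ha-148567 passes above while not Ready.
    func allSystemPodsRunning(ctx context.Context, cs kubernetes.Interface) error {
        pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, p := range pods.Items {
            if p.Status.Phase != corev1.PodRunning {
                return fmt.Errorf("pod %s is %s", p.Name, p.Status.Phase)
            }
        }
        return nil
    }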
	I1217 20:59:17.254602  568189 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 20:59:17.254663  568189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 20:59:17.268613  568189 system_svc.go:56] duration metric: took 13.999372ms WaitForService to wait for kubelet
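
The kubelet wait above is just an exit-status probe: `systemctl is-active --quiet` prints nothing and exits 0 only when the unit is active. The log runs it remotely via ssh_runner; a local sketch of the same probe (hypothetical helper):

    package svcwait

    import "os/exec"

    // kubeletActive mirrors `sudo systemctl is-active --quiet kubelet`:
    // --quiet suppresses output, so the exit status alone carries the answer.
    func kubeletActive() bool {
        return exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run() == nil
    }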
	I1217 20:59:17.268642  568189 kubeadm.go:587] duration metric: took 16.847089867s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 20:59:17.268661  568189 node_conditions.go:102] verifying NodePressure condition ...
	I1217 20:59:17.272882  568189 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1217 20:59:17.272914  568189 node_conditions.go:123] node cpu capacity is 2
	I1217 20:59:17.272927  568189 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1217 20:59:17.272933  568189 node_conditions.go:123] node cpu capacity is 2
	I1217 20:59:17.272955  568189 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1217 20:59:17.272965  568189 node_conditions.go:123] node cpu capacity is 2
	I1217 20:59:17.272970  568189 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1217 20:59:17.272974  568189 node_conditions.go:123] node cpu capacity is 2
	I1217 20:59:17.272990  568189 node_conditions.go:105] duration metric: took 4.323407ms to run NodePressure ...
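
The NodePressure pass prints one ephemeral-storage/CPU pair per node (four nodes, hence four pairs above). A client-go sketch of reading those capacities (illustrative; a fuller check would also scan node.Status.Conditions for MemoryPressure and DiskPressure):

    package nodecheck

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // printNodeCapacity reports the same two figures the log prints per node.
    func printNodeCapacity(ctx context.Context, cs kubernetes.Interface) error {
        nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, n := range nodes.Items {
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            fmt.Printf("node storage ephemeral capacity is %s\n", storage.String())
            fmt.Printf("node cpu capacity is %s\n", cpu.String())
        }
        return nil
    }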
	I1217 20:59:17.273004  568189 start.go:242] waiting for startup goroutines ...
	I1217 20:59:17.273044  568189 start.go:256] writing updated cluster config ...
	I1217 20:59:17.276641  568189 out.go:203] 
	I1217 20:59:17.279823  568189 config.go:182] Loaded profile config "ha-148567": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:59:17.279977  568189 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/config.json ...
	I1217 20:59:17.283346  568189 out.go:179] * Starting "ha-148567-m03" control-plane node in "ha-148567" cluster
	I1217 20:59:17.287005  568189 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 20:59:17.289900  568189 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 20:59:17.292694  568189 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 20:59:17.292719  568189 cache.go:65] Caching tarball of preloaded images
	I1217 20:59:17.292773  568189 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 20:59:17.292856  568189 preload.go:238] Found /home/jenkins/minikube-integration/21808-485134/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1217 20:59:17.292875  568189 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1217 20:59:17.293025  568189 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/config.json ...
	I1217 20:59:17.316772  568189 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 20:59:17.316795  568189 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 20:59:17.316808  568189 cache.go:243] Successfully downloaded all kic artifacts
	I1217 20:59:17.316834  568189 start.go:360] acquireMachinesLock for ha-148567-m03: {Name:mk79ac9edce64d0e8c2ded9c9074a2bd7d2b5d55 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 20:59:17.316888  568189 start.go:364] duration metric: took 38.95µs to acquireMachinesLock for "ha-148567-m03"
	I1217 20:59:17.316913  568189 start.go:96] Skipping create...Using existing machine configuration
	I1217 20:59:17.316918  568189 fix.go:54] fixHost starting: m03
	I1217 20:59:17.317283  568189 cli_runner.go:164] Run: docker container inspect ha-148567-m03 --format={{.State.Status}}
	I1217 20:59:17.334541  568189 fix.go:112] recreateIfNeeded on ha-148567-m03: state=Stopped err=<nil>
	W1217 20:59:17.334574  568189 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 20:59:17.337913  568189 out.go:252] * Restarting existing docker container for "ha-148567-m03" ...
	I1217 20:59:17.337998  568189 cli_runner.go:164] Run: docker start ha-148567-m03
	I1217 20:59:17.630601  568189 cli_runner.go:164] Run: docker container inspect ha-148567-m03 --format={{.State.Status}}
	I1217 20:59:17.661698  568189 kic.go:430] container "ha-148567-m03" state is running.
	I1217 20:59:17.662070  568189 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-148567-m03
	I1217 20:59:17.697058  568189 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/config.json ...
	I1217 20:59:17.697290  568189 machine.go:94] provisionDockerMachine start ...
	I1217 20:59:17.697346  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567-m03
	I1217 20:59:17.735501  568189 main.go:143] libmachine: Using SSH client type: native
	I1217 20:59:17.735872  568189 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33218 <nil> <nil>}
	I1217 20:59:17.735883  568189 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 20:59:17.736599  568189 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33978->127.0.0.1:33218: read: connection reset by peer
	I1217 20:59:20.923505  568189 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-148567-m03
	
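The handshake failure above ("connection reset by peer") is expected immediately after `docker start`: sshd inside the container is not accepting connections yet, and the next attempt succeeds about three seconds later. A minimal retry wrapper with golang.org/x/crypto/ssh (hypothetical helper, not libmachine's code):

    package sshwait

    import (
        "time"

        "golang.org/x/crypto/ssh"
    )

    // dialWithRetry retries the SSH dial until it succeeds or attempts run out,
    // absorbing the transient resets seen while sshd is still starting.
    func dialWithRetry(addr string, cfg *ssh.ClientConfig, attempts int) (*ssh.Client, error) {
        var lastErr error
        for i := 0; i < attempts; i++ {
            client, err := ssh.Dial("tcp", addr, cfg)
            if err == nil {
                return client, nil
            }
            lastErr = err
            time.Sleep(time.Second) // sshd typically comes up within a few seconds
        }
        return nil, lastErr
    }
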
	I1217 20:59:20.923622  568189 ubuntu.go:182] provisioning hostname "ha-148567-m03"
	I1217 20:59:20.923718  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567-m03
	I1217 20:59:20.957211  568189 main.go:143] libmachine: Using SSH client type: native
	I1217 20:59:20.957509  568189 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33218 <nil> <nil>}
	I1217 20:59:20.957520  568189 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-148567-m03 && echo "ha-148567-m03" | sudo tee /etc/hostname
	I1217 20:59:21.165423  568189 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-148567-m03
	
	I1217 20:59:21.165574  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567-m03
	I1217 20:59:21.192963  568189 main.go:143] libmachine: Using SSH client type: native
	I1217 20:59:21.193292  568189 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33218 <nil> <nil>}
	I1217 20:59:21.193313  568189 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-148567-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-148567-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-148567-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 20:59:21.368432  568189 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 20:59:21.368455  568189 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21808-485134/.minikube CaCertPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21808-485134/.minikube}
	I1217 20:59:21.368471  568189 ubuntu.go:190] setting up certificates
	I1217 20:59:21.368480  568189 provision.go:84] configureAuth start
	I1217 20:59:21.368545  568189 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-148567-m03
	I1217 20:59:21.396285  568189 provision.go:143] copyHostCerts
	I1217 20:59:21.396333  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem
	I1217 20:59:21.396368  568189 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem, removing ...
	I1217 20:59:21.396381  568189 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem
	I1217 20:59:21.396464  568189 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem (1082 bytes)
	I1217 20:59:21.396552  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem
	I1217 20:59:21.396575  568189 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem, removing ...
	I1217 20:59:21.396586  568189 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem
	I1217 20:59:21.396614  568189 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem (1123 bytes)
	I1217 20:59:21.396662  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem
	I1217 20:59:21.396683  568189 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem, removing ...
	I1217 20:59:21.396693  568189 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem
	I1217 20:59:21.396721  568189 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem (1675 bytes)
	I1217 20:59:21.396774  568189 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem org=jenkins.ha-148567-m03 san=[127.0.0.1 192.168.49.4 ha-148567-m03 localhost minikube]
	I1217 20:59:21.571429  568189 provision.go:177] copyRemoteCerts
	I1217 20:59:21.571550  568189 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 20:59:21.571647  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567-m03
	I1217 20:59:21.594363  568189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33218 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/ha-148567-m03/id_rsa Username:docker}
	I1217 20:59:21.708000  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1217 20:59:21.708057  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 20:59:21.741918  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1217 20:59:21.741984  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 20:59:21.772491  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1217 20:59:21.772556  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1217 20:59:21.816467  568189 provision.go:87] duration metric: took 447.972227ms to configureAuth
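
configureAuth mints a server certificate for the machine, signed by the local minikube CA and carrying the SANs shown in the log (127.0.0.1, 192.168.49.4, ha-148567-m03, localhost, minikube). A sketch of that signing step with crypto/x509 (illustrative; minikube's provision code differs in detail):

    package certs

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "math/big"
        "net"
        "time"
    )

    // signServerCert returns DER bytes for a CA-signed server cert; PEM-encode
    // the result with encoding/pem before writing server.pem / server-key.pem.
    func signServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-148567-m03"}},
            NotBefore:    time.Now().Add(-time.Hour),
            NotAfter:     time.Now().AddDate(1, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs matching the log: san=[127.0.0.1 192.168.49.4 ha-148567-m03 localhost minikube]
            DNSNames:    []string{"ha-148567-m03", "localhost", "minikube"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.4")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        if err != nil {
            return nil, nil, err
        }
        return der, key, nil
    }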
	I1217 20:59:21.816545  568189 ubuntu.go:206] setting minikube options for container-runtime
	I1217 20:59:21.816837  568189 config.go:182] Loaded profile config "ha-148567": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:59:21.816991  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567-m03
	I1217 20:59:21.842199  568189 main.go:143] libmachine: Using SSH client type: native
	I1217 20:59:21.842497  568189 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33218 <nil> <nil>}
	I1217 20:59:21.842510  568189 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 20:59:23.388796  568189 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 20:59:23.388873  568189 machine.go:97] duration metric: took 5.691572483s to provisionDockerMachine
	I1217 20:59:23.388901  568189 start.go:293] postStartSetup for "ha-148567-m03" (driver="docker")
	I1217 20:59:23.388945  568189 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 20:59:23.389048  568189 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 20:59:23.389125  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567-m03
	I1217 20:59:23.407539  568189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33218 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/ha-148567-m03/id_rsa Username:docker}
	I1217 20:59:23.504717  568189 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 20:59:23.508445  568189 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 20:59:23.508475  568189 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 20:59:23.508497  568189 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-485134/.minikube/addons for local assets ...
	I1217 20:59:23.508554  568189 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-485134/.minikube/files for local assets ...
	I1217 20:59:23.508641  568189 filesync.go:149] local asset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem -> 4884122.pem in /etc/ssl/certs
	I1217 20:59:23.508652  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem -> /etc/ssl/certs/4884122.pem
	I1217 20:59:23.508753  568189 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 20:59:23.516893  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem --> /etc/ssl/certs/4884122.pem (1708 bytes)
	I1217 20:59:23.537776  568189 start.go:296] duration metric: took 148.841829ms for postStartSetup
	I1217 20:59:23.537865  568189 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 20:59:23.537922  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567-m03
	I1217 20:59:23.556786  568189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33218 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/ha-148567-m03/id_rsa Username:docker}
	I1217 20:59:23.652766  568189 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 20:59:23.658117  568189 fix.go:56] duration metric: took 6.341191994s for fixHost
	I1217 20:59:23.658141  568189 start.go:83] releasing machines lock for "ha-148567-m03", held for 6.341239765s
	I1217 20:59:23.658236  568189 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-148567-m03
	I1217 20:59:23.679391  568189 out.go:179] * Found network options:
	I1217 20:59:23.682308  568189 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1217 20:59:23.685317  568189 proxy.go:120] fail to check proxy env: Error ip not in block
	W1217 20:59:23.685349  568189 proxy.go:120] fail to check proxy env: Error ip not in block
	I1217 20:59:23.685436  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem (1338 bytes)
	W1217 20:59:23.685484  568189 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412_empty.pem, impossibly tiny 0 bytes
	I1217 20:59:23.685498  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 20:59:23.685532  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem (1082 bytes)
	I1217 20:59:23.685564  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem (1123 bytes)
	I1217 20:59:23.685595  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem (1675 bytes)
	I1217 20:59:23.685643  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem (1708 bytes)
	I1217 20:59:23.685680  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem -> /usr/share/ca-certificates/488412.pem
	I1217 20:59:23.685700  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem -> /usr/share/ca-certificates/4884122.pem
	I1217 20:59:23.685712  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:59:23.685732  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem --> /usr/share/ca-certificates/488412.pem (1338 bytes)
	I1217 20:59:23.685785  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567-m03
	I1217 20:59:23.704133  568189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33218 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/ha-148567-m03/id_rsa Username:docker}
	I1217 20:59:23.825155  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem --> /usr/share/ca-certificates/4884122.pem (1708 bytes)
	I1217 20:59:23.849401  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 20:59:23.873252  568189 ssh_runner.go:195] Run: openssl version
	I1217 20:59:23.884717  568189 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/488412.pem
	I1217 20:59:23.894872  568189 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/488412.pem /etc/ssl/certs/488412.pem
	I1217 20:59:23.906983  568189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/488412.pem
	I1217 20:59:23.912255  568189 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 20:21 /usr/share/ca-certificates/488412.pem
	I1217 20:59:23.912326  568189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/488412.pem
	I1217 20:59:23.985078  568189 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 20:59:23.994724  568189 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4884122.pem
	I1217 20:59:24.026915  568189 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4884122.pem /etc/ssl/certs/4884122.pem
	I1217 20:59:24.068192  568189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4884122.pem
	I1217 20:59:24.080822  568189 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 20:21 /usr/share/ca-certificates/4884122.pem
	I1217 20:59:24.080947  568189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4884122.pem
	I1217 20:59:24.182542  568189 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 20:59:24.200285  568189 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:59:24.222177  568189 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 20:59:24.235700  568189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:59:24.244507  568189 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:59:24.244617  568189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:59:24.320887  568189 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
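
The openssl/ln pairs above install each CA into the OpenSSL hash directory: `openssl x509 -hash -noout` prints the subject hash, and /etc/ssl/certs/<hash>.0 must symlink to the PEM for lookup to work (b5213941.0 is minikubeCA.pem here). A sketch that shells out to openssl so the hash matches exactly (hypothetical helper):

    package cahash

    import (
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkCACert creates the /etc/ssl/certs/<subject-hash>.0 symlink that
    // OpenSSL-based clients use to locate a trusted CA.
    func linkCACert(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        _ = os.Remove(link) // mirror `ln -fs`: replace any stale link
        return os.Symlink(pemPath, link)
    }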
	I1217 20:59:24.336685  568189 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-certificates >/dev/null 2>&1 && sudo update-ca-certificates || true"
	I1217 20:59:24.350359  568189 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-trust >/dev/null 2>&1 && sudo update-ca-trust extract || true"
	W1217 20:59:24.358402  568189 proxy.go:120] fail to check proxy env: Error ip not in block
	W1217 20:59:24.358481  568189 proxy.go:120] fail to check proxy env: Error ip not in block
	I1217 20:59:24.358586  568189 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 20:59:24.358716  568189 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 20:59:24.592070  568189 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 20:59:24.599441  568189 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 20:59:24.599517  568189 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 20:59:24.610713  568189 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 20:59:24.610738  568189 start.go:496] detecting cgroup driver to use...
	I1217 20:59:24.610768  568189 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 20:59:24.610821  568189 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 20:59:24.642252  568189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 20:59:24.667730  568189 docker.go:218] disabling cri-docker service (if available) ...
	I1217 20:59:24.667804  568189 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 20:59:24.701389  568189 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 20:59:24.736876  568189 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 20:59:25.009438  568189 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 20:59:25.297427  568189 docker.go:234] disabling docker service ...
	I1217 20:59:25.297496  568189 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 20:59:25.322653  568189 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 20:59:25.339124  568189 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 20:59:25.552070  568189 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 20:59:25.758562  568189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 20:59:25.777883  568189 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 20:59:25.800345  568189 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 20:59:25.800419  568189 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:59:25.816339  568189 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1217 20:59:25.816411  568189 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:59:25.826969  568189 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:59:25.836513  568189 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:59:25.846534  568189 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 20:59:25.856329  568189 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:59:25.866346  568189 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:59:25.875696  568189 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:59:25.885875  568189 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 20:59:25.894536  568189 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 20:59:25.903937  568189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:59:26.158009  568189 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 20:59:27.447640  568189 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.289596192s)
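
Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with settings roughly like the following before crio is restarted (illustrative reconstruction from the commands shown; the real drop-in carries more keys):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]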
	I1217 20:59:27.447667  568189 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 20:59:27.447742  568189 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 20:59:27.451909  568189 start.go:564] Will wait 60s for crictl version
	I1217 20:59:27.452022  568189 ssh_runner.go:195] Run: which crictl
	I1217 20:59:27.455782  568189 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 20:59:27.480696  568189 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 20:59:27.480875  568189 ssh_runner.go:195] Run: crio --version
	I1217 20:59:27.511380  568189 ssh_runner.go:195] Run: crio --version
	I1217 20:59:27.545667  568189 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	I1217 20:59:27.548725  568189 out.go:179]   - env NO_PROXY=192.168.49.2
	I1217 20:59:27.551654  568189 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1217 20:59:27.554631  568189 cli_runner.go:164] Run: docker network inspect ha-148567 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 20:59:27.569507  568189 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1217 20:59:27.573575  568189 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 20:59:27.583348  568189 mustload.go:66] Loading cluster: ha-148567
	I1217 20:59:27.583685  568189 config.go:182] Loaded profile config "ha-148567": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:59:27.583957  568189 cli_runner.go:164] Run: docker container inspect ha-148567 --format={{.State.Status}}
	I1217 20:59:27.602103  568189 host.go:66] Checking if "ha-148567" exists ...
	I1217 20:59:27.603047  568189 certs.go:69] Setting up /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567 for IP: 192.168.49.4
	I1217 20:59:27.603066  568189 certs.go:195] generating shared ca certs ...
	I1217 20:59:27.603090  568189 certs.go:227] acquiring lock for ca certs: {Name:mk2b1bd9fa0b029b02dd38a7b1b08717d4426eca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:59:27.603216  568189 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.key
	I1217 20:59:27.603263  568189 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.key
	I1217 20:59:27.603274  568189 certs.go:257] generating profile certs ...
	I1217 20:59:27.603376  568189 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/client.key
	I1217 20:59:27.603463  568189 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/apiserver.key.3b1ba341
	I1217 20:59:27.603515  568189 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/proxy-client.key
	I1217 20:59:27.603530  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1217 20:59:27.603543  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1217 20:59:27.603558  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1217 20:59:27.603572  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1217 20:59:27.603621  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1217 20:59:27.603634  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1217 20:59:27.603645  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1217 20:59:27.603655  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1217 20:59:27.603709  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem (1338 bytes)
	W1217 20:59:27.603744  568189 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412_empty.pem, impossibly tiny 0 bytes
	I1217 20:59:27.603756  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 20:59:27.603782  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem (1082 bytes)
	I1217 20:59:27.603813  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem (1123 bytes)
	I1217 20:59:27.603839  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem (1675 bytes)
	I1217 20:59:27.603886  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem (1708 bytes)
	I1217 20:59:27.603922  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:59:27.603937  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem -> /usr/share/ca-certificates/488412.pem
	I1217 20:59:27.603948  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem -> /usr/share/ca-certificates/4884122.pem
	I1217 20:59:27.604007  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567
	I1217 20:59:27.622811  568189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33208 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/ha-148567/id_rsa Username:docker}
	I1217 20:59:27.711932  568189 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1217 20:59:27.715648  568189 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1217 20:59:27.723761  568189 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1217 20:59:27.727209  568189 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1217 20:59:27.735381  568189 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1217 20:59:27.738998  568189 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1217 20:59:27.747188  568189 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1217 20:59:27.750785  568189 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1217 20:59:27.758913  568189 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1217 20:59:27.762427  568189 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1217 20:59:27.770856  568189 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1217 20:59:27.774347  568189 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1217 20:59:27.782918  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 20:59:27.807233  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 20:59:27.825936  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 20:59:27.843705  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1217 20:59:27.863259  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1217 20:59:27.883764  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 20:59:27.904255  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 20:59:27.951575  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 20:59:27.979511  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 20:59:28.010041  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem --> /usr/share/ca-certificates/488412.pem (1338 bytes)
	I1217 20:59:28.032795  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem --> /usr/share/ca-certificates/4884122.pem (1708 bytes)
	I1217 20:59:28.058120  568189 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1217 20:59:28.072480  568189 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1217 20:59:28.096660  568189 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1217 20:59:28.111050  568189 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1217 20:59:28.125599  568189 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1217 20:59:28.139988  568189 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1217 20:59:28.154668  568189 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
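
	The `scp memory --> <path>` transfers above are minikube's ssh_runner streaming in-memory byte slices (the sa, front-proxy and etcd key material read into memory just above) straight to files on the target, with no local temp file in between. A minimal sketch of the same idea against golang.org/x/crypto/ssh, assuming an already-dialed *ssh.Client; copyMemory is a hypothetical helper, and it pipes through `sudo tee` where the real runner speaks the scp protocol:

	package main

	import (
		"bytes"
		"fmt"

		"golang.org/x/crypto/ssh"
	)

	// copyMemory writes an in-memory byte slice to dst on the remote host,
	// mirroring the "scp memory --> <path>" transfers in the log above.
	func copyMemory(client *ssh.Client, data []byte, dst string) error {
		sess, err := client.NewSession()
		if err != nil {
			return err
		}
		defer sess.Close()
		sess.Stdin = bytes.NewReader(data)
		// tee rather than shell redirection, so sudo applies to the file write.
		return sess.Run(fmt.Sprintf("sudo tee %q >/dev/null", dst))
	}
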
	I1217 20:59:28.168340  568189 ssh_runner.go:195] Run: openssl version
	I1217 20:59:28.174792  568189 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:59:28.182440  568189 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 20:59:28.191221  568189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:59:28.195516  568189 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:59:28.195766  568189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:59:28.244735  568189 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 20:59:28.252179  568189 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/488412.pem
	I1217 20:59:28.259686  568189 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/488412.pem /etc/ssl/certs/488412.pem
	I1217 20:59:28.270202  568189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/488412.pem
	I1217 20:59:28.274707  568189 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 20:21 /usr/share/ca-certificates/488412.pem
	I1217 20:59:28.274826  568189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/488412.pem
	I1217 20:59:28.316566  568189 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 20:59:28.324532  568189 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4884122.pem
	I1217 20:59:28.331852  568189 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4884122.pem /etc/ssl/certs/4884122.pem
	I1217 20:59:28.344147  568189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4884122.pem
	I1217 20:59:28.349920  568189 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 20:21 /usr/share/ca-certificates/4884122.pem
	I1217 20:59:28.350026  568189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4884122.pem
	I1217 20:59:28.397463  568189 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
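
	Each trust-store install above follows the same four-step shape: `sudo test -s` to confirm the PEM landed, `ln -fs` into /etc/ssl/certs, `openssl x509 -hash -noout` to compute the subject-name hash, then `sudo test -L /etc/ssl/certs/<hash>.0` to confirm the hashed symlink OpenSSL's CA lookup expects (b5213941.0, 51391683.0 and 3ec20f2e.0 are those hashes for the three certs here). A sketch of that sequence, where run is a hypothetical stand-in for ssh_runner that executes a shell command on the node and returns its stdout:

	package main

	import (
		"fmt"
		"path"
		"strings"
	)

	// ensureTrusted mirrors the test/ln/hash/test sequence in the log above.
	func ensureTrusted(run func(cmd string) (string, error), pemPath string) error {
		if _, err := run("sudo test -s " + pemPath); err != nil {
			return err
		}
		if _, err := run(fmt.Sprintf("sudo ln -fs %s /etc/ssl/certs/%s", pemPath, path.Base(pemPath))); err != nil {
			return err
		}
		hash, err := run("openssl x509 -hash -noout -in " + pemPath)
		if err != nil {
			return err
		}
		// OpenSSL resolves CAs through <subject-hash>.0 symlinks in the certs dir.
		_, err = run(fmt.Sprintf("sudo test -L /etc/ssl/certs/%s.0", strings.TrimSpace(hash)))
		return err
	}
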
	I1217 20:59:28.405538  568189 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 20:59:28.409482  568189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 20:59:28.452939  568189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 20:59:28.494338  568189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 20:59:28.540466  568189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 20:59:28.582836  568189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 20:59:28.624131  568189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
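
	`openssl x509 -checkend 86400` exits non-zero when the certificate expires within the next 86400 seconds, so the six runs above are 24-hour freshness checks on the apiserver, kubelet-client, etcd and front-proxy certs before reusing them. The equivalent check in pure Go, as a sketch (validFor is a hypothetical helper):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// validFor reports whether the PEM certificate at p is still valid d from
	// now -- the Go equivalent of `openssl x509 -checkend <seconds>`.
	func validFor(p string, d time.Duration) (bool, error) {
		raw, err := os.ReadFile(p)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(raw)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", p)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).Before(cert.NotAfter), nil
	}

	Called as validFor("/var/lib/minikube/certs/etcd/peer.crt", 24*time.Hour), it answers the same question as each -checkend 86400 run above.
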
	I1217 20:59:28.667766  568189 kubeadm.go:935] updating node {m03 192.168.49.4 8443 v1.34.3 crio true true} ...
	I1217 20:59:28.667874  568189 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-148567-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:ha-148567 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 20:59:28.667909  568189 kube-vip.go:115] generating kube-vip config ...
	I1217 20:59:28.667967  568189 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1217 20:59:28.681456  568189 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1217 20:59:28.681523  568189 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
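
	The ip_vs probe failing at 20:59:28.681 (exit 1 from `lsmod | grep ip_vs`) is why the manifest above carries no IPVS-based control-plane load balancing: kube-vip falls back to ARP-advertised VIP failover (vip_arp=true plus leader election on the plndr-cp-lock lease), so 192.168.49.254 is only ever answered by the current leader. Since lsmod is just a formatted view of /proc/modules, an in-process version of the probe is short; a sketch that, like lsmod, will miss an ip_vs compiled into the kernel rather than loaded as a module:

	package main

	import (
		"os"
		"strings"
	)

	// ipvsAvailable mirrors the `lsmod | grep ip_vs` probe in the log above;
	// module names lead each /proc/modules line, so a prefix check suffices.
	func ipvsAvailable() (bool, error) {
		data, err := os.ReadFile("/proc/modules")
		if err != nil {
			return false, err
		}
		for _, line := range strings.Split(string(data), "\n") {
			if strings.HasPrefix(line, "ip_vs") {
				return true, nil
			}
		}
		return false, nil
	}
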
	I1217 20:59:28.681593  568189 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1217 20:59:28.689896  568189 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 20:59:28.689971  568189 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1217 20:59:28.697831  568189 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1217 20:59:28.713126  568189 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 20:59:28.729184  568189 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1217 20:59:28.745530  568189 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1217 20:59:28.749870  568189 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
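
	The /bin/bash one-liner above is an idempotent hosts-file update: `grep -v` drops any line whose name field is control-plane.minikube.internal, the fresh "192.168.49.254	control-plane.minikube.internal" mapping is appended, and the result is written to a temp file and `sudo cp`'d back so only the final write needs privileges. The same transformation as a pure function, sketched (ensureHostsEntry is hypothetical):

	package main

	import "strings"

	// ensureHostsEntry drops any existing "<ip>\t<name>" mapping and appends
	// the desired one -- the pure-Go shape of the bash pipeline above.
	func ensureHostsEntry(hosts, ip, name string) string {
		var out []string
		for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
			if !strings.HasSuffix(line, "\t"+name) { // the grep -v filter
				out = append(out, line)
			}
		}
		out = append(out, ip+"\t"+name)
		return strings.Join(out, "\n") + "\n"
	}
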
	I1217 20:59:28.762032  568189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:59:28.899317  568189 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 20:59:28.916505  568189 start.go:236] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 20:59:28.916882  568189 config.go:182] Loaded profile config "ha-148567": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:59:28.921876  568189 out.go:179] * Verifying Kubernetes components...
	I1217 20:59:28.924845  568189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:59:29.067107  568189 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 20:59:29.082388  568189 kapi.go:59] client config for ha-148567: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/client.crt", KeyFile:"/home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/client.key", CAFile:"/home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb51f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1217 20:59:29.082463  568189 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1217 20:59:29.082744  568189 node_ready.go:35] waiting up to 6m0s for node "ha-148567-m03" to be "Ready" ...
	I1217 20:59:29.086184  568189 node_ready.go:49] node "ha-148567-m03" is "Ready"
	I1217 20:59:29.086213  568189 node_ready.go:38] duration metric: took 3.444045ms for node "ha-148567-m03" to be "Ready" ...
	I1217 20:59:29.086226  568189 api_server.go:52] waiting for apiserver process to appear ...
	I1217 20:59:29.086308  568189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:59:29.587146  568189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:59:30.086424  568189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:59:30.587043  568189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:59:31.087307  568189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:59:31.587125  568189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:59:32.087199  568189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:59:32.586440  568189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:59:33.087014  568189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:59:33.587262  568189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:59:34.086776  568189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:59:34.586785  568189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:59:35.086598  568189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:59:35.587225  568189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:59:36.087060  568189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:59:36.587238  568189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:59:37.087356  568189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:59:37.586962  568189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:59:38.086425  568189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:59:38.587186  568189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:59:39.086440  568189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:59:39.587206  568189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:59:40.087337  568189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:59:40.586682  568189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:59:41.086960  568189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:59:41.587321  568189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:59:42.087299  568189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:59:42.587074  568189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:59:43.086416  568189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:59:43.100960  568189 api_server.go:72] duration metric: took 14.18440701s to wait for apiserver process to appear ...
	I1217 20:59:43.100982  568189 api_server.go:88] waiting for apiserver healthz status ...
	I1217 20:59:43.101000  568189 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1217 20:59:43.111943  568189 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1217 20:59:43.113605  568189 api_server.go:141] control plane version: v1.34.3
	I1217 20:59:43.113627  568189 api_server.go:131] duration metric: took 12.639438ms to wait for apiserver health ...
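
	Readiness here is a two-stage gate: roughly 14s of polling `pgrep -xnf kube-apiserver.*minikube.*` every 500ms until an apiserver process exists, then an HTTPS GET against /healthz that must come back 200 with body "ok" (note the check goes to 192.168.49.2:8443 after the stale-VIP override above, not through 192.168.49.254). A compact sketch of the second stage, assuming a TLS-configured *http.Client such as the rest.Config above would yield; waitHealthz is a hypothetical helper:

	package main

	import (
		"context"
		"io"
		"net/http"
		"strings"
		"time"
	)

	// waitHealthz polls url until it returns 200 "ok" or ctx expires.
	func waitHealthz(ctx context.Context, c *http.Client, url string) error {
		tick := time.NewTicker(500 * time.Millisecond)
		defer tick.Stop()
		for {
			req, _ := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
			if resp, err := c.Do(req); err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK && strings.TrimSpace(string(body)) == "ok" {
					return nil
				}
			}
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-tick.C:
			}
		}
	}
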
	I1217 20:59:43.113635  568189 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 20:59:43.122498  568189 system_pods.go:59] 26 kube-system pods found
	I1217 20:59:43.122587  568189 system_pods.go:61] "coredns-66bc5c9577-l8xqv" [e3a7d90f-a4fd-4393-a20a-d49f8a8aa0db] Running
	I1217 20:59:43.122609  568189 system_pods.go:61] "coredns-66bc5c9577-wgcmx" [4fbbda83-a7d2-41c0-98ea-066d493cd483] Running
	I1217 20:59:43.122628  568189 system_pods.go:61] "etcd-ha-148567" [d54c6b05-3d02-4af3-a488-ffe69d55c8c3] Running
	I1217 20:59:43.122660  568189 system_pods.go:61] "etcd-ha-148567-m02" [574da258-f8b2-4230-93a1-028b2c5765bd] Running
	I1217 20:59:43.122680  568189 system_pods.go:61] "etcd-ha-148567-m03" [c1a60f08-76af-48b5-afe6-9389bc0fca8b] Running
	I1217 20:59:43.122700  568189 system_pods.go:61] "kindnet-4xxcs" [3950f031-7524-40a2-aa03-f50e754478ed] Running
	I1217 20:59:43.122719  568189 system_pods.go:61] "kindnet-88zsz" [22238c23-358a-49e1-82d6-ac88a03c654f] Running
	I1217 20:59:43.122747  568189 system_pods.go:61] "kindnet-gwspj" [f5b895d0-8eba-4923-9537-bd07bd57d3b5] Running
	I1217 20:59:43.122769  568189 system_pods.go:61] "kindnet-pv94f" [6135ada6-3e1f-4b1b-a2e2-014a1eb62772] Running
	I1217 20:59:43.122787  568189 system_pods.go:61] "kube-apiserver-ha-148567" [3b830c1a-94be-41e9-bc3a-725b97833055] Running
	I1217 20:59:43.122807  568189 system_pods.go:61] "kube-apiserver-ha-148567-m02" [0f0e1b1c-b98e-46cd-a683-ce654b6fdeb1] Running
	I1217 20:59:43.122827  568189 system_pods.go:61] "kube-apiserver-ha-148567-m03" [1be33e29-7193-43bb-aaf4-e48cbf50abf9] Running
	I1217 20:59:43.122857  568189 system_pods.go:61] "kube-controller-manager-ha-148567" [8baf857a-5cbe-4de6-9aba-7a244ab2fb08] Running
	I1217 20:59:43.122886  568189 system_pods.go:61] "kube-controller-manager-ha-148567-m02" [da8d38ea-86a0-4b97-9512-0541ac0d2d44] Running
	I1217 20:59:43.122906  568189 system_pods.go:61] "kube-controller-manager-ha-148567-m03" [360b1069-90d9-45be-a0c3-3c81c88154e0] Running
	I1217 20:59:43.122929  568189 system_pods.go:61] "kube-proxy-8nmpd" [52f8f6d6-8de7-4758-b35e-843a2a6ed562] Running
	I1217 20:59:43.122960  568189 system_pods.go:61] "kube-proxy-9n5cb" [775371d6-bc7d-40f6-8e0f-655f265828ba] Running
	I1217 20:59:43.122982  568189 system_pods.go:61] "kube-proxy-9rv8b" [7e160782-bab5-49b5-950c-377bde3bec7f] Running
	I1217 20:59:43.123002  568189 system_pods.go:61] "kube-proxy-cbk47" [149fd1d8-7762-49fd-81da-23047452dc4a] Running
	I1217 20:59:43.123021  568189 system_pods.go:61] "kube-scheduler-ha-148567" [a04a9bb2-7f73-4940-9952-049c4b406086] Running
	I1217 20:59:43.123040  568189 system_pods.go:61] "kube-scheduler-ha-148567-m02" [534b3ee9-d48c-450d-b20d-308e0a13a720] Running
	I1217 20:59:43.123071  568189 system_pods.go:61] "kube-scheduler-ha-148567-m03" [08f0c7d8-2adc-444e-a8db-5ad41340d101] Running
	I1217 20:59:43.123099  568189 system_pods.go:61] "kube-vip-ha-148567" [05c573d7-3cc9-4d90-8a3e-154ba3c7423a] Running
	I1217 20:59:43.123129  568189 system_pods.go:61] "kube-vip-ha-148567-m02" [4c4de02d-d57f-4a66-983e-12dbdb0f3521] Running
	I1217 20:59:43.123149  568189 system_pods.go:61] "kube-vip-ha-148567-m03" [37a516e3-0f3b-413f-be72-a445c095667f] Running
	I1217 20:59:43.123176  568189 system_pods.go:61] "storage-provisioner" [b613dd7f-cf83-4a6a-a6b8-c7b6282790ab] Running
	I1217 20:59:43.123204  568189 system_pods.go:74] duration metric: took 9.561362ms to wait for pod list to return data ...
	I1217 20:59:43.123228  568189 default_sa.go:34] waiting for default service account to be created ...
	I1217 20:59:43.126857  568189 default_sa.go:45] found service account: "default"
	I1217 20:59:43.126922  568189 default_sa.go:55] duration metric: took 3.673226ms for default service account to be created ...
	I1217 20:59:43.126952  568189 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 20:59:43.134811  568189 system_pods.go:86] 26 kube-system pods found
	I1217 20:59:43.134893  568189 system_pods.go:89] "coredns-66bc5c9577-l8xqv" [e3a7d90f-a4fd-4393-a20a-d49f8a8aa0db] Running
	I1217 20:59:43.134915  568189 system_pods.go:89] "coredns-66bc5c9577-wgcmx" [4fbbda83-a7d2-41c0-98ea-066d493cd483] Running
	I1217 20:59:43.134937  568189 system_pods.go:89] "etcd-ha-148567" [d54c6b05-3d02-4af3-a488-ffe69d55c8c3] Running
	I1217 20:59:43.134966  568189 system_pods.go:89] "etcd-ha-148567-m02" [574da258-f8b2-4230-93a1-028b2c5765bd] Running
	I1217 20:59:43.134990  568189 system_pods.go:89] "etcd-ha-148567-m03" [c1a60f08-76af-48b5-afe6-9389bc0fca8b] Running
	I1217 20:59:43.135010  568189 system_pods.go:89] "kindnet-4xxcs" [3950f031-7524-40a2-aa03-f50e754478ed] Running
	I1217 20:59:43.135031  568189 system_pods.go:89] "kindnet-88zsz" [22238c23-358a-49e1-82d6-ac88a03c654f] Running
	I1217 20:59:43.135052  568189 system_pods.go:89] "kindnet-gwspj" [f5b895d0-8eba-4923-9537-bd07bd57d3b5] Running
	I1217 20:59:43.135081  568189 system_pods.go:89] "kindnet-pv94f" [6135ada6-3e1f-4b1b-a2e2-014a1eb62772] Running
	I1217 20:59:43.135118  568189 system_pods.go:89] "kube-apiserver-ha-148567" [3b830c1a-94be-41e9-bc3a-725b97833055] Running
	I1217 20:59:43.135138  568189 system_pods.go:89] "kube-apiserver-ha-148567-m02" [0f0e1b1c-b98e-46cd-a683-ce654b6fdeb1] Running
	I1217 20:59:43.135160  568189 system_pods.go:89] "kube-apiserver-ha-148567-m03" [1be33e29-7193-43bb-aaf4-e48cbf50abf9] Running
	I1217 20:59:43.135194  568189 system_pods.go:89] "kube-controller-manager-ha-148567" [8baf857a-5cbe-4de6-9aba-7a244ab2fb08] Running
	I1217 20:59:43.135222  568189 system_pods.go:89] "kube-controller-manager-ha-148567-m02" [da8d38ea-86a0-4b97-9512-0541ac0d2d44] Running
	I1217 20:59:43.135243  568189 system_pods.go:89] "kube-controller-manager-ha-148567-m03" [360b1069-90d9-45be-a0c3-3c81c88154e0] Running
	I1217 20:59:43.135263  568189 system_pods.go:89] "kube-proxy-8nmpd" [52f8f6d6-8de7-4758-b35e-843a2a6ed562] Running
	I1217 20:59:43.135283  568189 system_pods.go:89] "kube-proxy-9n5cb" [775371d6-bc7d-40f6-8e0f-655f265828ba] Running
	I1217 20:59:43.135311  568189 system_pods.go:89] "kube-proxy-9rv8b" [7e160782-bab5-49b5-950c-377bde3bec7f] Running
	I1217 20:59:43.135338  568189 system_pods.go:89] "kube-proxy-cbk47" [149fd1d8-7762-49fd-81da-23047452dc4a] Running
	I1217 20:59:43.135357  568189 system_pods.go:89] "kube-scheduler-ha-148567" [a04a9bb2-7f73-4940-9952-049c4b406086] Running
	I1217 20:59:43.135375  568189 system_pods.go:89] "kube-scheduler-ha-148567-m02" [534b3ee9-d48c-450d-b20d-308e0a13a720] Running
	I1217 20:59:43.135394  568189 system_pods.go:89] "kube-scheduler-ha-148567-m03" [08f0c7d8-2adc-444e-a8db-5ad41340d101] Running
	I1217 20:59:43.135423  568189 system_pods.go:89] "kube-vip-ha-148567" [05c573d7-3cc9-4d90-8a3e-154ba3c7423a] Running
	I1217 20:59:43.135455  568189 system_pods.go:89] "kube-vip-ha-148567-m02" [4c4de02d-d57f-4a66-983e-12dbdb0f3521] Running
	I1217 20:59:43.135477  568189 system_pods.go:89] "kube-vip-ha-148567-m03" [37a516e3-0f3b-413f-be72-a445c095667f] Running
	I1217 20:59:43.135495  568189 system_pods.go:89] "storage-provisioner" [b613dd7f-cf83-4a6a-a6b8-c7b6282790ab] Running
	I1217 20:59:43.135529  568189 system_pods.go:126] duration metric: took 8.54658ms to wait for k8s-apps to be running ...
	I1217 20:59:43.135556  568189 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 20:59:43.135647  568189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 20:59:43.150029  568189 system_svc.go:56] duration metric: took 14.465953ms WaitForService to wait for kubelet
	I1217 20:59:43.150071  568189 kubeadm.go:587] duration metric: took 14.233522691s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 20:59:43.150090  568189 node_conditions.go:102] verifying NodePressure condition ...
	I1217 20:59:43.154561  568189 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1217 20:59:43.154592  568189 node_conditions.go:123] node cpu capacity is 2
	I1217 20:59:43.154613  568189 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1217 20:59:43.154619  568189 node_conditions.go:123] node cpu capacity is 2
	I1217 20:59:43.154624  568189 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1217 20:59:43.154628  568189 node_conditions.go:123] node cpu capacity is 2
	I1217 20:59:43.154641  568189 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1217 20:59:43.154646  568189 node_conditions.go:123] node cpu capacity is 2
	I1217 20:59:43.154651  568189 node_conditions.go:105] duration metric: took 4.555345ms to run NodePressure ...
	I1217 20:59:43.154681  568189 start.go:242] waiting for startup goroutines ...
	I1217 20:59:43.154709  568189 start.go:256] writing updated cluster config ...
	I1217 20:59:43.158527  568189 out.go:203] 
	I1217 20:59:43.161746  568189 config.go:182] Loaded profile config "ha-148567": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:59:43.161871  568189 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/config.json ...
	I1217 20:59:43.165329  568189 out.go:179] * Starting "ha-148567-m04" worker node in "ha-148567" cluster
	I1217 20:59:43.168355  568189 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 20:59:43.171262  568189 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 20:59:43.174132  568189 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 20:59:43.174410  568189 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 20:59:43.174454  568189 cache.go:65] Caching tarball of preloaded images
	I1217 20:59:43.174570  568189 preload.go:238] Found /home/jenkins/minikube-integration/21808-485134/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1217 20:59:43.174613  568189 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1217 20:59:43.174766  568189 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/config.json ...
	I1217 20:59:43.198461  568189 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 20:59:43.198481  568189 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 20:59:43.198493  568189 cache.go:243] Successfully downloaded all kic artifacts
	I1217 20:59:43.198516  568189 start.go:360] acquireMachinesLock for ha-148567-m04: {Name:mk553b42915df9bd549a5c28a2faaee12bc3aaa4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 20:59:43.198572  568189 start.go:364] duration metric: took 34.134µs to acquireMachinesLock for "ha-148567-m04"
	I1217 20:59:43.198597  568189 start.go:96] Skipping create...Using existing machine configuration
	I1217 20:59:43.198602  568189 fix.go:54] fixHost starting: m04
	I1217 20:59:43.198879  568189 cli_runner.go:164] Run: docker container inspect ha-148567-m04 --format={{.State.Status}}
	I1217 20:59:43.217750  568189 fix.go:112] recreateIfNeeded on ha-148567-m04: state=Stopped err=<nil>
	W1217 20:59:43.217781  568189 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 20:59:43.221013  568189 out.go:252] * Restarting existing docker container for "ha-148567-m04" ...
	I1217 20:59:43.221102  568189 cli_runner.go:164] Run: docker start ha-148567-m04
	I1217 20:59:43.516797  568189 cli_runner.go:164] Run: docker container inspect ha-148567-m04 --format={{.State.Status}}
	I1217 20:59:43.540017  568189 kic.go:430] container "ha-148567-m04" state is running.
	I1217 20:59:43.540568  568189 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-148567-m04
	I1217 20:59:43.574859  568189 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/config.json ...
	I1217 20:59:43.575129  568189 machine.go:94] provisionDockerMachine start ...
	I1217 20:59:43.575199  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567-m04
	I1217 20:59:43.606726  568189 main.go:143] libmachine: Using SSH client type: native
	I1217 20:59:43.607040  568189 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33223 <nil> <nil>}
	I1217 20:59:43.607056  568189 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 20:59:43.607773  568189 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58242->127.0.0.1:33223: read: connection reset by peer
	I1217 20:59:46.803819  568189 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-148567-m04
	
	I1217 20:59:46.803848  568189 ubuntu.go:182] provisioning hostname "ha-148567-m04"
	I1217 20:59:46.803941  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567-m04
	I1217 20:59:46.836537  568189 main.go:143] libmachine: Using SSH client type: native
	I1217 20:59:46.836852  568189 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33223 <nil> <nil>}
	I1217 20:59:46.836874  568189 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-148567-m04 && echo "ha-148567-m04" | sudo tee /etc/hostname
	I1217 20:59:47.026899  568189 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-148567-m04
	
	I1217 20:59:47.027037  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567-m04
	I1217 20:59:47.062751  568189 main.go:143] libmachine: Using SSH client type: native
	I1217 20:59:47.063061  568189 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33223 <nil> <nil>}
	I1217 20:59:47.063082  568189 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-148567-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-148567-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-148567-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 20:59:47.256926  568189 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 20:59:47.257018  568189 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21808-485134/.minikube CaCertPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21808-485134/.minikube}
	I1217 20:59:47.257283  568189 ubuntu.go:190] setting up certificates
	I1217 20:59:47.257314  568189 provision.go:84] configureAuth start
	I1217 20:59:47.257398  568189 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-148567-m04
	I1217 20:59:47.295834  568189 provision.go:143] copyHostCerts
	I1217 20:59:47.295877  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem
	I1217 20:59:47.295912  568189 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem, removing ...
	I1217 20:59:47.295919  568189 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem
	I1217 20:59:47.296003  568189 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem (1082 bytes)
	I1217 20:59:47.296090  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem
	I1217 20:59:47.296108  568189 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem, removing ...
	I1217 20:59:47.296113  568189 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem
	I1217 20:59:47.296139  568189 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem (1123 bytes)
	I1217 20:59:47.296196  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem
	I1217 20:59:47.296215  568189 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem, removing ...
	I1217 20:59:47.296219  568189 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem
	I1217 20:59:47.296250  568189 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem (1675 bytes)
	I1217 20:59:47.296313  568189 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem org=jenkins.ha-148567-m04 san=[127.0.0.1 192.168.49.5 ha-148567-m04 localhost minikube]
	I1217 20:59:47.379272  568189 provision.go:177] copyRemoteCerts
	I1217 20:59:47.379345  568189 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 20:59:47.379394  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567-m04
	I1217 20:59:47.403843  568189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33223 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/ha-148567-m04/id_rsa Username:docker}
	I1217 20:59:47.518369  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1217 20:59:47.518441  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 20:59:47.576564  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1217 20:59:47.576687  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1217 20:59:47.604142  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1217 20:59:47.604201  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 20:59:47.631334  568189 provision.go:87] duration metric: took 373.991006ms to configureAuth
	I1217 20:59:47.631359  568189 ubuntu.go:206] setting minikube options for container-runtime
	I1217 20:59:47.631685  568189 config.go:182] Loaded profile config "ha-148567": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:59:47.631793  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567-m04
	I1217 20:59:47.657183  568189 main.go:143] libmachine: Using SSH client type: native
	I1217 20:59:47.657502  568189 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33223 <nil> <nil>}
	I1217 20:59:47.657518  568189 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 20:59:48.158234  568189 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 20:59:48.158306  568189 machine.go:97] duration metric: took 4.583160847s to provisionDockerMachine
	I1217 20:59:48.158332  568189 start.go:293] postStartSetup for "ha-148567-m04" (driver="docker")
	I1217 20:59:48.158359  568189 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 20:59:48.158470  568189 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 20:59:48.158549  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567-m04
	I1217 20:59:48.182261  568189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33223 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/ha-148567-m04/id_rsa Username:docker}
	I1217 20:59:48.298135  568189 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 20:59:48.311846  568189 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 20:59:48.311884  568189 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 20:59:48.311907  568189 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-485134/.minikube/addons for local assets ...
	I1217 20:59:48.311974  568189 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-485134/.minikube/files for local assets ...
	I1217 20:59:48.312067  568189 filesync.go:149] local asset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem -> 4884122.pem in /etc/ssl/certs
	I1217 20:59:48.312079  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem -> /etc/ssl/certs/4884122.pem
	I1217 20:59:48.312200  568189 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 20:59:48.329656  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem --> /etc/ssl/certs/4884122.pem (1708 bytes)
	I1217 20:59:48.373531  568189 start.go:296] duration metric: took 215.167593ms for postStartSetup
	I1217 20:59:48.373663  568189 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 20:59:48.373725  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567-m04
	I1217 20:59:48.400005  568189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33223 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/ha-148567-m04/id_rsa Username:docker}
	I1217 20:59:48.502218  568189 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 20:59:48.508483  568189 fix.go:56] duration metric: took 5.309874613s for fixHost
	I1217 20:59:48.508507  568189 start.go:83] releasing machines lock for "ha-148567-m04", held for 5.309926708s
	I1217 20:59:48.508573  568189 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-148567-m04
	I1217 20:59:48.542166  568189 out.go:179] * Found network options:
	I1217 20:59:48.545031  568189 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	W1217 20:59:48.547822  568189 proxy.go:120] fail to check proxy env: Error ip not in block
	W1217 20:59:48.547865  568189 proxy.go:120] fail to check proxy env: Error ip not in block
	W1217 20:59:48.547882  568189 proxy.go:120] fail to check proxy env: Error ip not in block
	I1217 20:59:48.547964  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem (1338 bytes)
	W1217 20:59:48.548007  568189 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412_empty.pem, impossibly tiny 0 bytes
	I1217 20:59:48.548015  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 20:59:48.548043  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem (1082 bytes)
	I1217 20:59:48.548068  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem (1123 bytes)
	I1217 20:59:48.548092  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem (1675 bytes)
	I1217 20:59:48.548135  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem (1708 bytes)
	I1217 20:59:48.548169  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem -> /usr/share/ca-certificates/4884122.pem
	I1217 20:59:48.548185  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:59:48.548196  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem -> /usr/share/ca-certificates/488412.pem
	I1217 20:59:48.548214  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem --> /usr/share/ca-certificates/4884122.pem (1708 bytes)
	I1217 20:59:48.548266  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567-m04
	I1217 20:59:48.578677  568189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33223 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/ha-148567-m04/id_rsa Username:docker}
	I1217 20:59:48.719848  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 20:59:48.753882  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem --> /usr/share/ca-certificates/488412.pem (1338 bytes)
	I1217 20:59:48.792107  568189 ssh_runner.go:195] Run: openssl version
	I1217 20:59:48.804085  568189 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/488412.pem
	I1217 20:59:48.816313  568189 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/488412.pem /etc/ssl/certs/488412.pem
	I1217 20:59:48.832761  568189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/488412.pem
	I1217 20:59:48.840746  568189 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 20:21 /usr/share/ca-certificates/488412.pem
	I1217 20:59:48.840863  568189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/488412.pem
	I1217 20:59:48.902488  568189 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 20:59:48.912364  568189 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4884122.pem
	I1217 20:59:48.923914  568189 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4884122.pem /etc/ssl/certs/4884122.pem
	I1217 20:59:48.940092  568189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4884122.pem
	I1217 20:59:48.947071  568189 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 20:21 /usr/share/ca-certificates/4884122.pem
	I1217 20:59:48.947150  568189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4884122.pem
	I1217 20:59:49.021813  568189 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 20:59:49.034659  568189 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:59:49.053384  568189 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 20:59:49.069859  568189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:59:49.077887  568189 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:59:49.078004  568189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:59:49.137254  568189 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 20:59:49.153091  568189 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-certificates >/dev/null 2>&1 && sudo update-ca-certificates || true"
	I1217 20:59:49.159186  568189 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-trust >/dev/null 2>&1 && sudo update-ca-trust extract || true"
	W1217 20:59:49.165011  568189 proxy.go:120] fail to check proxy env: Error ip not in block
	W1217 20:59:49.165053  568189 proxy.go:120] fail to check proxy env: Error ip not in block
	W1217 20:59:49.165063  568189 proxy.go:120] fail to check proxy env: Error ip not in block
	I1217 20:59:49.165151  568189 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 20:59:49.165273  568189 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 20:59:49.359347  568189 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 20:59:49.368376  568189 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 20:59:49.368491  568189 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 20:59:49.391939  568189 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 20:59:49.392014  568189 start.go:496] detecting cgroup driver to use...
	I1217 20:59:49.392069  568189 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 20:59:49.392143  568189 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 20:59:49.427410  568189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 20:59:49.445092  568189 docker.go:218] disabling cri-docker service (if available) ...
	I1217 20:59:49.445199  568189 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 20:59:49.463345  568189 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 20:59:49.480078  568189 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 20:59:49.663757  568189 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 20:59:49.840193  568189 docker.go:234] disabling docker service ...
	I1217 20:59:49.840317  568189 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 20:59:49.860557  568189 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 20:59:49.877087  568189 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 20:59:50.055711  568189 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 20:59:50.231385  568189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 20:59:50.254028  568189 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 20:59:50.285776  568189 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 20:59:50.285901  568189 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:59:50.299125  568189 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1217 20:59:50.299249  568189 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:59:50.308719  568189 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:59:50.317674  568189 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:59:50.326552  568189 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 20:59:50.334774  568189 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:59:50.343683  568189 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:59:50.357610  568189 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:59:50.371978  568189 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 20:59:50.381012  568189 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 20:59:50.389890  568189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:59:50.573931  568189 ssh_runner.go:195] Run: sudo systemctl restart crio
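
	Net effect of the sed edits at 20:59:50.28-50.36: /etc/crio/crio.conf.d/02-crio.conf now points CRI-O at the registry.k8s.io/pause:3.10.1 pause image, the cgroupfs cgroup manager (matching the "cgroupfs" driver detected on the host at 20:59:49.392), conmon in the pod cgroup, and an unprivileged-port sysctl so containers can bind ports below 1024. Reconstructed from those commands (not captured from the node), the touched keys end up roughly as:

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]

	The daemon-reload and crio restart just above put these into effect before kubelet comes up.
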
	I1217 20:59:50.817600  568189 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 20:59:50.817730  568189 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 20:59:50.823707  568189 start.go:564] Will wait 60s for crictl version
	I1217 20:59:50.823823  568189 ssh_runner.go:195] Run: which crictl
	I1217 20:59:50.829375  568189 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 20:59:50.907046  568189 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 20:59:50.907198  568189 ssh_runner.go:195] Run: crio --version
	I1217 20:59:50.968526  568189 ssh_runner.go:195] Run: crio --version
	I1217 20:59:51.022232  568189 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	I1217 20:59:51.025095  568189 out.go:179]   - env NO_PROXY=192.168.49.2
	I1217 20:59:51.028040  568189 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1217 20:59:51.031031  568189 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	I1217 20:59:51.033982  568189 cli_runner.go:164] Run: docker network inspect ha-148567 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 20:59:51.058290  568189 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1217 20:59:51.064756  568189 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 20:59:51.084472  568189 mustload.go:66] Loading cluster: ha-148567
	I1217 20:59:51.084822  568189 config.go:182] Loaded profile config "ha-148567": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:59:51.085173  568189 cli_runner.go:164] Run: docker container inspect ha-148567 --format={{.State.Status}}
	I1217 20:59:51.122113  568189 host.go:66] Checking if "ha-148567" exists ...
	I1217 20:59:51.122410  568189 certs.go:69] Setting up /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567 for IP: 192.168.49.5
	I1217 20:59:51.122425  568189 certs.go:195] generating shared ca certs ...
	I1217 20:59:51.122444  568189 certs.go:227] acquiring lock for ca certs: {Name:mk2b1bd9fa0b029b02dd38a7b1b08717d4426eca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:59:51.122555  568189 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.key
	I1217 20:59:51.122603  568189 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.key
	I1217 20:59:51.122617  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1217 20:59:51.122638  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1217 20:59:51.122649  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1217 20:59:51.122665  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1217 20:59:51.122723  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem (1338 bytes)
	W1217 20:59:51.122759  568189 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412_empty.pem, impossibly tiny 0 bytes
	I1217 20:59:51.122771  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 20:59:51.122798  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem (1082 bytes)
	I1217 20:59:51.122830  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem (1123 bytes)
	I1217 20:59:51.122855  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem (1675 bytes)
	I1217 20:59:51.122904  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem (1708 bytes)
	I1217 20:59:51.122943  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem -> /usr/share/ca-certificates/4884122.pem
	I1217 20:59:51.122961  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:59:51.122973  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem -> /usr/share/ca-certificates/488412.pem
	I1217 20:59:51.122997  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 20:59:51.146685  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 20:59:51.175270  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 20:59:51.202157  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1217 20:59:51.226103  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem --> /usr/share/ca-certificates/4884122.pem (1708 bytes)
	I1217 20:59:51.248874  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 20:59:51.269857  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem --> /usr/share/ca-certificates/488412.pem (1338 bytes)
	I1217 20:59:51.310997  568189 ssh_runner.go:195] Run: openssl version
	I1217 20:59:51.319341  568189 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/488412.pem
	I1217 20:59:51.330020  568189 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/488412.pem /etc/ssl/certs/488412.pem
	I1217 20:59:51.339343  568189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/488412.pem
	I1217 20:59:51.350841  568189 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 20:21 /usr/share/ca-certificates/488412.pem
	I1217 20:59:51.350957  568189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/488412.pem
	I1217 20:59:51.400605  568189 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 20:59:51.414512  568189 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4884122.pem
	I1217 20:59:51.424023  568189 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4884122.pem /etc/ssl/certs/4884122.pem
	I1217 20:59:51.432640  568189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4884122.pem
	I1217 20:59:51.437401  568189 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 20:21 /usr/share/ca-certificates/4884122.pem
	I1217 20:59:51.437481  568189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4884122.pem
	I1217 20:59:51.482765  568189 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 20:59:51.491449  568189 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:59:51.501741  568189 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 20:59:51.515339  568189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:59:51.520544  568189 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:59:51.520666  568189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:59:51.565528  568189 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
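The openssl version / ln -fs / openssl x509 -hash -noout / test -L sequence above is how extra CA certificates get wired into the node's system trust store: each PEM is linked under /etc/ssl/certs by name, its OpenSSL subject hash is computed, and a <hash>.0 symlink is checked (e.g. b5213941.0 for minikubeCA.pem) so OpenSSL's hashed lookup can find it. A minimal Go sketch of that pattern, assuming a local shell rather than minikube's ssh_runner; the function name installCACert is illustrative:

package certs

import (
	"fmt"
	"os/exec"
	"strings"
)

// installCACert mirrors the sequence in the log: link the PEM under
// /etc/ssl/certs by name, compute its OpenSSL subject hash, and create the
// <hash>.0 alias that OpenSSL's hashed lookup expects. Sketch only; the
// real steps run over SSH on the node, and sudo is assumed available.
func installCACert(pemPath, linkName string) error {
	if out, err := exec.Command("sudo", "ln", "-fs", pemPath, "/etc/ssl/certs/"+linkName).CombinedOutput(); err != nil {
		return fmt.Errorf("ln -fs: %v: %s", err, out)
	}
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("openssl x509 -hash: %v", err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem
	if out, err := exec.Command("sudo", "ln", "-fs", pemPath, "/etc/ssl/certs/"+hash+".0").CombinedOutput(); err != nil {
		return fmt.Errorf("ln -fs %s.0: %v: %s", hash, err, out)
	}
	return nil
}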
	I1217 20:59:51.574279  568189 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 20:59:51.579195  568189 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 20:59:51.579288  568189 kubeadm.go:935] updating node {m04 192.168.49.5 0 v1.34.3  false true} ...
	I1217 20:59:51.579397  568189 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-148567-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:ha-148567 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
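The block above is the rendered kubelet systemd drop-in for the joining worker (m04): ExecStart is first cleared, then re-set with per-node flags such as --hostname-override and --node-ip, and the unit is made to want crio.service. A sketch of how such a drop-in can be rendered from node parameters, assuming a plain text/template; the type and field names are illustrative and the flag list is abridged:

package main

import (
	"os"
	"text/template"
)

// kubeletOpts carries the per-node values substituted into the drop-in.
// Illustrative names, not minikube's actual types.
type kubeletOpts struct {
	Version, NodeName, NodeIP string
}

const unitTmpl = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unitTmpl))
	// Values taken from the m04 join shown in the log above.
	_ = t.Execute(os.Stdout, kubeletOpts{Version: "v1.34.3", NodeName: "ha-148567-m04", NodeIP: "192.168.49.5"})
}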
	I1217 20:59:51.579514  568189 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1217 20:59:51.588520  568189 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 20:59:51.588644  568189 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1217 20:59:51.600506  568189 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1217 20:59:51.617987  568189 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 20:59:51.637341  568189 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1217 20:59:51.641707  568189 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 20:59:51.653386  568189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:59:51.824077  568189 ssh_runner.go:195] Run: sudo systemctl start kubelet
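The grep/bash pair above keeps /etc/hosts idempotent: any stale control-plane.minikube.internal line is filtered out and the current VIP (192.168.49.254) is appended before the file is copied back into place. A native-Go equivalent of that pipeline, as a sketch (the real step runs the shell command over SSH):

package hosts

import (
	"fmt"
	"os"
	"strings"
)

// updateHosts reproduces the effect of the bash one-liner in the log:
// drop any existing "control-plane.minikube.internal" entry from the
// hosts file and append the current VIP, so the entry stays unique
// across restarts.
func updateHosts(path, vip string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			continue // remove the stale entry
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\tcontrol-plane.minikube.internal", vip))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}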
	I1217 20:59:51.843148  568189 start.go:236] Will wait 6m0s for node &{Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.3 ContainerRuntime: ControlPlane:false Worker:true}
	I1217 20:59:51.843522  568189 config.go:182] Loaded profile config "ha-148567": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:59:51.848815  568189 out.go:179] * Verifying Kubernetes components...
	I1217 20:59:51.852560  568189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:59:51.982897  568189 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 20:59:52.000066  568189 kapi.go:59] client config for ha-148567: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/client.crt", KeyFile:"/home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/client.key", CAFile:"/home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb51f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1217 20:59:52.000192  568189 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1217 20:59:52.000451  568189 node_ready.go:35] waiting up to 6m0s for node "ha-148567-m04" to be "Ready" ...
	I1217 20:59:52.006183  568189 node_ready.go:49] node "ha-148567-m04" is "Ready"
	I1217 20:59:52.006239  568189 node_ready.go:38] duration metric: took 5.759781ms for node "ha-148567-m04" to be "Ready" ...
	I1217 20:59:52.006258  568189 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 20:59:52.006601  568189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 20:59:52.047225  568189 system_svc.go:56] duration metric: took 40.959365ms WaitForService to wait for kubelet
	I1217 20:59:52.047255  568189 kubeadm.go:587] duration metric: took 203.674646ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
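The node_ready.go wait above amounts to polling the node object until its Ready condition reports True, under a 6m deadline. A client-go sketch of the same check (the helper name waitNodeReady is illustrative, not minikube's own):

package nodewait

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady polls the API server until the named node reports the
// Ready condition as True, matching the node_ready.go step in the log.
func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat API errors as transient and keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}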
	I1217 20:59:52.047276  568189 node_conditions.go:102] verifying NodePressure condition ...
	I1217 20:59:52.051902  568189 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1217 20:59:52.051946  568189 node_conditions.go:123] node cpu capacity is 2
	I1217 20:59:52.051960  568189 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1217 20:59:52.051980  568189 node_conditions.go:123] node cpu capacity is 2
	I1217 20:59:52.051986  568189 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1217 20:59:52.051991  568189 node_conditions.go:123] node cpu capacity is 2
	I1217 20:59:52.052000  568189 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1217 20:59:52.052005  568189 node_conditions.go:123] node cpu capacity is 2
	I1217 20:59:52.052015  568189 node_conditions.go:105] duration metric: took 4.734079ms to run NodePressure ...
	I1217 20:59:52.052027  568189 start.go:242] waiting for startup goroutines ...
	I1217 20:59:52.052063  568189 start.go:256] writing updated cluster config ...
	I1217 20:59:52.052403  568189 ssh_runner.go:195] Run: rm -f paused
	I1217 20:59:52.057083  568189 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 20:59:52.057721  568189 kapi.go:59] client config for ha-148567: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/client.crt", KeyFile:"/home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/client.key", CAFile:"/home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb51f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
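Note QPS:0 and Burst:0 in both config dumps: with those fields unset, client-go falls back to its defaults of 5 requests per second with a burst of 10, which is what produces the repeated "Waited before sending request ... client-side throttling" lines below. A sketch of raising the client-side limit on a rest.Config (the values and helper name are illustrative):

package fastclient

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// newFastClient builds a clientset with a higher client-side rate limit.
// Left at zero, QPS/Burst default to 5 and 10 inside client-go, and the
// rate limiter logs the throttling waits seen in this run.
func newFastClient(kubeconfig string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	cfg.QPS = 50    // allow 50 requests/second steady state
	cfg.Burst = 100 // and bursts of up to 100
	return kubernetes.NewForConfig(cfg)
}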
	I1217 20:59:52.075282  568189 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-l8xqv" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:59:52.083372  568189 pod_ready.go:94] pod "coredns-66bc5c9577-l8xqv" is "Ready"
	I1217 20:59:52.083403  568189 pod_ready.go:86] duration metric: took 8.086341ms for pod "coredns-66bc5c9577-l8xqv" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:59:52.083414  568189 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-wgcmx" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:59:52.104642  568189 pod_ready.go:94] pod "coredns-66bc5c9577-wgcmx" is "Ready"
	I1217 20:59:52.104676  568189 pod_ready.go:86] duration metric: took 21.254359ms for pod "coredns-66bc5c9577-wgcmx" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:59:52.108222  568189 pod_ready.go:83] waiting for pod "etcd-ha-148567" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:59:52.114067  568189 pod_ready.go:94] pod "etcd-ha-148567" is "Ready"
	I1217 20:59:52.114095  568189 pod_ready.go:86] duration metric: took 5.843992ms for pod "etcd-ha-148567" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:59:52.114104  568189 pod_ready.go:83] waiting for pod "etcd-ha-148567-m02" in "kube-system" namespace to be "Ready" or be gone ...
	W1217 20:59:54.121101  568189 pod_ready.go:104] pod "etcd-ha-148567-m02" is not "Ready", error: <nil>
	W1217 20:59:56.121594  568189 pod_ready.go:104] pod "etcd-ha-148567-m02" is not "Ready", error: <nil>
	I1217 20:59:58.129487  568189 pod_ready.go:94] pod "etcd-ha-148567-m02" is "Ready"
	I1217 20:59:58.129512  568189 pod_ready.go:86] duration metric: took 6.015400557s for pod "etcd-ha-148567-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:59:58.129523  568189 pod_ready.go:83] waiting for pod "etcd-ha-148567-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:59:58.142269  568189 pod_ready.go:94] pod "etcd-ha-148567-m03" is "Ready"
	I1217 20:59:58.142292  568189 pod_ready.go:86] duration metric: took 12.762885ms for pod "etcd-ha-148567-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:59:58.146453  568189 pod_ready.go:83] waiting for pod "kube-apiserver-ha-148567" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:59:58.164280  568189 pod_ready.go:94] pod "kube-apiserver-ha-148567" is "Ready"
	I1217 20:59:58.164356  568189 pod_ready.go:86] duration metric: took 17.878983ms for pod "kube-apiserver-ha-148567" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:59:58.164381  568189 pod_ready.go:83] waiting for pod "kube-apiserver-ha-148567-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:59:58.259174  568189 request.go:683] "Waited before sending request" delay="88.189794ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-148567-m02"
	I1217 20:59:58.268569  568189 pod_ready.go:94] pod "kube-apiserver-ha-148567-m02" is "Ready"
	I1217 20:59:58.268593  568189 pod_ready.go:86] duration metric: took 104.192931ms for pod "kube-apiserver-ha-148567-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:59:58.268603  568189 pod_ready.go:83] waiting for pod "kube-apiserver-ha-148567-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:59:58.458982  568189 request.go:683] "Waited before sending request" delay="190.303242ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-148567-m03"
	I1217 20:59:58.658315  568189 request.go:683] "Waited before sending request" delay="195.215539ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-148567-m03"
	I1217 20:59:58.661689  568189 pod_ready.go:94] pod "kube-apiserver-ha-148567-m03" is "Ready"
	I1217 20:59:58.661723  568189 pod_ready.go:86] duration metric: took 393.113399ms for pod "kube-apiserver-ha-148567-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:59:58.859073  568189 request.go:683] "Waited before sending request" delay="197.228659ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I1217 20:59:58.863798  568189 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-148567" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:59:59.059209  568189 request.go:683] "Waited before sending request" delay="195.315815ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-148567"
	I1217 20:59:59.258903  568189 request.go:683] "Waited before sending request" delay="196.340082ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-148567"
	I1217 20:59:59.265017  568189 pod_ready.go:94] pod "kube-controller-manager-ha-148567" is "Ready"
	I1217 20:59:59.265041  568189 pod_ready.go:86] duration metric: took 401.217693ms for pod "kube-controller-manager-ha-148567" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:59:59.265051  568189 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-148567-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:59:59.458390  568189 request.go:683] "Waited before sending request" delay="193.253489ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-148567-m02"
	I1217 20:59:59.658551  568189 request.go:683] "Waited before sending request" delay="180.126333ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-148567-m02"
	I1217 20:59:59.662062  568189 pod_ready.go:94] pod "kube-controller-manager-ha-148567-m02" is "Ready"
	I1217 20:59:59.662093  568189 pod_ready.go:86] duration metric: took 397.034758ms for pod "kube-controller-manager-ha-148567-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:59:59.662104  568189 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-148567-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:59:59.858282  568189 request.go:683] "Waited before sending request" delay="196.102269ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-148567-m03"
	I1217 21:00:00.075408  568189 request.go:683] "Waited before sending request" delay="213.781913ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-148567-m03"
	I1217 21:00:00.089107  568189 pod_ready.go:94] pod "kube-controller-manager-ha-148567-m03" is "Ready"
	I1217 21:00:00.089136  568189 pod_ready.go:86] duration metric: took 427.024958ms for pod "kube-controller-manager-ha-148567-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 21:00:00.258516  568189 request.go:683] "Waited before sending request" delay="169.272025ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-proxy"
	I1217 21:00:00.322743  568189 pod_ready.go:83] waiting for pod "kube-proxy-8nmpd" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 21:00:00.459982  568189 request.go:683] "Waited before sending request" delay="137.098152ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8nmpd"
	I1217 21:00:00.701120  568189 pod_ready.go:94] pod "kube-proxy-8nmpd" is "Ready"
	I1217 21:00:00.701146  568189 pod_ready.go:86] duration metric: took 378.365284ms for pod "kube-proxy-8nmpd" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 21:00:00.701157  568189 pod_ready.go:83] waiting for pod "kube-proxy-9n5cb" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 21:00:00.858493  568189 request.go:683] "Waited before sending request" delay="157.248259ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9n5cb"
	I1217 21:00:01.058920  568189 request.go:683] "Waited before sending request" delay="150.537073ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-148567"
	I1217 21:00:01.068198  568189 pod_ready.go:94] pod "kube-proxy-9n5cb" is "Ready"
	I1217 21:00:01.068230  568189 pod_ready.go:86] duration metric: took 367.062133ms for pod "kube-proxy-9n5cb" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 21:00:01.068243  568189 pod_ready.go:83] waiting for pod "kube-proxy-9rv8b" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 21:00:01.262645  568189 request.go:683] "Waited before sending request" delay="194.315293ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9rv8b"
	I1217 21:00:01.458640  568189 request.go:683] "Waited before sending request" delay="153.080094ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-148567-m04"
	I1217 21:00:01.462978  568189 pod_ready.go:94] pod "kube-proxy-9rv8b" is "Ready"
	I1217 21:00:01.463012  568189 pod_ready.go:86] duration metric: took 394.75948ms for pod "kube-proxy-9rv8b" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 21:00:01.463024  568189 pod_ready.go:83] waiting for pod "kube-proxy-cbk47" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 21:00:01.658301  568189 request.go:683] "Waited before sending request" delay="195.184202ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cbk47"
	I1217 21:00:01.858277  568189 request.go:683] "Waited before sending request" delay="195.25946ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-148567-m02"
	I1217 21:00:01.862378  568189 pod_ready.go:94] pod "kube-proxy-cbk47" is "Ready"
	I1217 21:00:01.862409  568189 pod_ready.go:86] duration metric: took 399.37762ms for pod "kube-proxy-cbk47" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 21:00:02.058910  568189 request.go:683] "Waited before sending request" delay="196.359519ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-scheduler"
	I1217 21:00:02.063347  568189 pod_ready.go:83] waiting for pod "kube-scheduler-ha-148567" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 21:00:02.258828  568189 request.go:683] "Waited before sending request" delay="195.344917ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-148567"
	I1217 21:00:02.458794  568189 request.go:683] "Waited before sending request" delay="192.303347ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-148567"
	I1217 21:00:02.462249  568189 pod_ready.go:94] pod "kube-scheduler-ha-148567" is "Ready"
	I1217 21:00:02.462330  568189 pod_ready.go:86] duration metric: took 398.949995ms for pod "kube-scheduler-ha-148567" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 21:00:02.462347  568189 pod_ready.go:83] waiting for pod "kube-scheduler-ha-148567-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 21:00:02.658751  568189 request.go:683] "Waited before sending request" delay="196.3297ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-148567-m02"
	I1217 21:00:02.858900  568189 request.go:683] "Waited before sending request" delay="196.191697ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-148567-m02"
	I1217 21:00:03.058800  568189 request.go:683] "Waited before sending request" delay="96.270325ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-148567-m02"
	I1217 21:00:03.258609  568189 request.go:683] "Waited before sending request" delay="196.310803ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-148567-m02"
	I1217 21:00:03.658820  568189 request.go:683] "Waited before sending request" delay="192.320766ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-148567-m02"
	I1217 21:00:04.059107  568189 request.go:683] "Waited before sending request" delay="91.269847ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-148567-m02"
	W1217 21:00:04.473348  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:00:06.969463  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:00:08.970426  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:00:11.469067  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:00:13.469840  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:00:15.969240  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:00:17.970193  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:00:20.472073  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:00:22.968559  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:00:24.969719  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:00:26.969862  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:00:29.470421  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:00:31.972330  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:00:34.469131  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:00:36.470941  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:00:38.970444  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:00:41.469557  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:00:43.469705  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:00:45.969149  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:00:47.969777  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:00:50.469751  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:00:52.969483  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:00:54.969568  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:00:57.468587  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:00:59.469765  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:01:01.470220  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:01:03.968803  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:01:05.969289  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:01:07.970839  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:01:10.469532  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:01:12.470536  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:01:14.968677  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:01:16.969870  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:01:19.469773  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:01:21.473506  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:01:23.970699  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:01:26.469423  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:01:28.470176  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:01:30.970041  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:01:33.468708  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:01:35.470792  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:01:37.470979  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:01:39.969393  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:01:41.971168  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:01:43.973569  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:01:46.469101  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:01:48.469649  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:01:50.469830  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:01:52.969858  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:01:55.468819  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:01:57.469502  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:01:59.473027  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:02:01.969273  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:02:03.970006  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:02:06.469903  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:02:08.470528  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:02:10.969500  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:02:12.969708  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:02:15.469498  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:02:17.969560  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:02:20.471040  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:02:22.970398  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:02:25.470111  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:02:27.969892  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:02:30.470124  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:02:32.969858  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:02:34.970684  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:02:36.970849  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:02:39.468689  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:02:41.469503  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:02:43.969114  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:02:45.969652  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:02:47.970284  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:02:50.469486  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:02:52.469974  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:02:54.470624  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:02:56.969815  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:02:59.469488  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:03:01.469627  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:03:03.970512  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:03:06.469961  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:03:08.969174  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:03:10.969626  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:03:12.970730  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:03:15.469047  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:03:17.470130  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:03:19.473448  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:03:21.969933  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:03:23.970894  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:03:26.470713  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:03:28.968830  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:03:30.970218  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:03:33.468960  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:03:35.469770  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:03:37.968748  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:03:39.968975  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:03:41.969305  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:03:44.468880  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:03:46.469851  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:03:48.968886  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:03:50.969624  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	I1217 21:03:52.057311  568189 pod_ready.go:86] duration metric: took 3m49.59494638s for pod "kube-scheduler-ha-148567-m02" in "kube-system" namespace to be "Ready" or be gone ...
	W1217 21:03:52.057351  568189 pod_ready.go:65] not all pods in "kube-system" namespace with "component=kube-scheduler" label are "Ready", will retry: waitPodCondition: context deadline exceeded
	I1217 21:03:52.057365  568189 pod_ready.go:40] duration metric: took 4m0.000201029s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 21:03:52.060383  568189 out.go:203] 
	W1217 21:03:52.063300  568189 out.go:285] X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded
	I1217 21:03:52.066188  568189 out.go:203] 
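The failure mode above is pod_ready.go polling every pod matching the component/k8s-app selectors until each reports PodReady=True; kube-scheduler-ha-148567-m02 never did, so the 4m budget expired and the run exited with GUEST_START. A client-go sketch of that style of labelled readiness wait (waitPodsReady is an illustrative name, not minikube's helper):

package podwait

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodsReady polls until every pod matching the selector has a
// PodReady condition of True, or the timeout expires with
// context.DeadlineExceeded, as happened for component=kube-scheduler here.
func waitPodsReady(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return false, nil // transient API errors: keep polling
			}
			for _, p := range pods.Items {
				ready := false
				for _, c := range p.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						ready = true
					}
				}
				if !ready {
					return false, nil
				}
			}
			return true, nil
		})
}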
	
	
	==> CRI-O <==
	Dec 17 20:59:39 ha-148567 crio[710]: time="2025-12-17T20:59:39.629560273Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=9167353b-2bf3-479e-964a-74f0d40c8545 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:59:39 ha-148567 crio[710]: time="2025-12-17T20:59:39.630687753Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=4c1f126a-a7f4-4eb6-8471-96e15a4f4b97 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 20:59:39 ha-148567 crio[710]: time="2025-12-17T20:59:39.630831533Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 20:59:39 ha-148567 crio[710]: time="2025-12-17T20:59:39.637333781Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 20:59:39 ha-148567 crio[710]: time="2025-12-17T20:59:39.637640641Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/8a5b38e85e717e293c44b562e20cb9e6c498fea8bc90e344c95ff4782baf3677/merged/etc/passwd: no such file or directory"
	Dec 17 20:59:39 ha-148567 crio[710]: time="2025-12-17T20:59:39.637732916Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/8a5b38e85e717e293c44b562e20cb9e6c498fea8bc90e344c95ff4782baf3677/merged/etc/group: no such file or directory"
	Dec 17 20:59:39 ha-148567 crio[710]: time="2025-12-17T20:59:39.638163716Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 20:59:39 ha-148567 crio[710]: time="2025-12-17T20:59:39.663985219Z" level=info msg="Created container 5b4cc9c722ee34aabaca591c96b1752871791b2d6c7d43442e7dd50f3ee524e3: kube-system/storage-provisioner/storage-provisioner" id=4c1f126a-a7f4-4eb6-8471-96e15a4f4b97 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 20:59:39 ha-148567 crio[710]: time="2025-12-17T20:59:39.666283281Z" level=info msg="Starting container: 5b4cc9c722ee34aabaca591c96b1752871791b2d6c7d43442e7dd50f3ee524e3" id=dfa56923-d43f-4222-845d-8de9ac088dd0 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 20:59:39 ha-148567 crio[710]: time="2025-12-17T20:59:39.668658423Z" level=info msg="Started container" PID=1522 containerID=5b4cc9c722ee34aabaca591c96b1752871791b2d6c7d43442e7dd50f3ee524e3 description=kube-system/storage-provisioner/storage-provisioner id=dfa56923-d43f-4222-845d-8de9ac088dd0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4ef70d8ffc0a00de889fdcd244ebaeaece44bf36c2fbca9eaac20ddec8a9e090
	Dec 17 20:59:49 ha-148567 crio[710]: time="2025-12-17T20:59:49.371783983Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 17 20:59:49 ha-148567 crio[710]: time="2025-12-17T20:59:49.376846411Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 17 20:59:49 ha-148567 crio[710]: time="2025-12-17T20:59:49.376883285Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 17 20:59:49 ha-148567 crio[710]: time="2025-12-17T20:59:49.376913004Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 17 20:59:49 ha-148567 crio[710]: time="2025-12-17T20:59:49.394117485Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 17 20:59:49 ha-148567 crio[710]: time="2025-12-17T20:59:49.394157485Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 17 20:59:49 ha-148567 crio[710]: time="2025-12-17T20:59:49.394177596Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 17 20:59:49 ha-148567 crio[710]: time="2025-12-17T20:59:49.412098163Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 17 20:59:49 ha-148567 crio[710]: time="2025-12-17T20:59:49.412258182Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 17 20:59:49 ha-148567 crio[710]: time="2025-12-17T20:59:49.412360066Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 17 20:59:49 ha-148567 crio[710]: time="2025-12-17T20:59:49.424241463Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 17 20:59:49 ha-148567 crio[710]: time="2025-12-17T20:59:49.424398651Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 17 20:59:49 ha-148567 crio[710]: time="2025-12-17T20:59:49.424482041Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 17 20:59:49 ha-148567 crio[710]: time="2025-12-17T20:59:49.438777079Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 17 20:59:49 ha-148567 crio[710]: time="2025-12-17T20:59:49.438966218Z" level=info msg="Updated default CNI network name to kindnet"
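The CNI monitoring events above (CREATE and WRITE on 10-kindnet.conflist.temp, then the RENAME into place) are CRI-O's file watcher noticing kindnet writing its config atomically and re-resolving the default network each time. A sketch of that watch-and-reload pattern with fsnotify, the mechanism CRI-O's ocicni layer builds on; the function names here are illustrative, not CRI-O's:

package cniwatch

import (
	"log"

	"github.com/fsnotify/fsnotify"
)

// watchCNIDir watches a CNI config directory and invokes reload on
// CREATE/WRITE/RENAME events, mirroring the "CNI monitoring event" lines
// in the CRI-O log. Illustration only of the pattern, not CRI-O's code.
func watchCNIDir(dir string, reload func()) error {
	w, err := fsnotify.NewWatcher()
	if err != nil {
		return err
	}
	defer w.Close()
	if err := w.Add(dir); err != nil {
		return err
	}
	for {
		select {
		case ev := <-w.Events:
			log.Printf("CNI monitoring event %s %q", ev.Op, ev.Name)
			if ev.Op&(fsnotify.Create|fsnotify.Write|fsnotify.Rename) != 0 {
				reload() // re-scan the directory for the default network
			}
		case err := <-w.Errors:
			return err
		}
	}
}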
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	5b4cc9c722ee3       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   4 minutes ago       Running             storage-provisioner       2                   4ef70d8ffc0a0       storage-provisioner                 kube-system
	c001c946de439       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   4 minutes ago       Running             coredns                   1                   929945e3d1a3e       coredns-66bc5c9577-wgcmx            kube-system
	3d59d54580266       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   4 minutes ago       Running             coredns                   1                   bb9c16be2a1ae       coredns-66bc5c9577-l8xqv            kube-system
	9c2f443274791       4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162   4 minutes ago       Running             kube-proxy                1                   45b30456d0d02       kube-proxy-9n5cb                    kube-system
	e05d2769fa75c       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   4 minutes ago       Running             busybox                   1                   48e89d3a90ce3       busybox-7b57f96db7-wpzp9            default
	36cc5a99d5800       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   4 minutes ago       Exited              storage-provisioner       1                   4ef70d8ffc0a0       storage-provisioner                 kube-system
	be62aea7ae9e3       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13   4 minutes ago       Running             kindnet-cni               1                   889abb571076e       kindnet-pv94f                       kube-system
	494e8522562ca       7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22   4 minutes ago       Running             kube-controller-manager   4                   762c1badea8ef       kube-controller-manager-ha-148567   kube-system
	58f3a197004f5       cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896   5 minutes ago       Running             kube-apiserver            2                   1572377e842c5       kube-apiserver-ha-148567            kube-system
	bf8c2f6823453       7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22   5 minutes ago       Exited              kube-controller-manager   3                   762c1badea8ef       kube-controller-manager-ha-148567   kube-system
	7b48eea7424a1       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   6 minutes ago       Running             etcd                      1                   97798849c8ba9       etcd-ha-148567                      kube-system
	055c04d40b9a0       2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6   6 minutes ago       Running             kube-scheduler            1                   778e1fadf4b3d       kube-scheduler-ha-148567            kube-system
	4f2a8a504377b       369db9dfa6fa96c1f4a0f3c827dbe864b5ded1802c8b4810b5ff9fcc5f5f2c70   6 minutes ago       Running             kube-vip                  0                   c8f53dfc2b78e       kube-vip-ha-148567                  kube-system
	0273f065d6acf       cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896   6 minutes ago       Exited              kube-apiserver            1                   1572377e842c5       kube-apiserver-ha-148567            kube-system
	
	
	==> coredns [3d59d545802667bca4afd18f76c3bf960baeb6a6cfa8136dd546f29b9af19a5f] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53197 - 52993 "HINFO IN 5822912137944380976.4895998307528040920. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020505453s
	
	
	==> coredns [c001c946de4393f262b155b7097a5e53a29de886277d7d4f4b38fbec1514bf01] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49334 - 9860 "HINFO IN 3026609084912095735.2907426380693665954. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.042306577s
	
	
	==> describe nodes <==
	Name:               ha-148567
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-148567
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e7c291d147a7a4c759554efbd6d659a1a65fa869
	                    minikube.k8s.io/name=ha-148567
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T20_52_58_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 20:52:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-148567
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 21:03:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 21:03:22 +0000   Wed, 17 Dec 2025 20:52:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 21:03:22 +0000   Wed, 17 Dec 2025 20:52:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 21:03:22 +0000   Wed, 17 Dec 2025 20:52:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 21:03:22 +0000   Wed, 17 Dec 2025 20:53:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-148567
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 0dc957e113b26e583da13082693ddabc
	  System UUID:                c516dc0e-66c5-424a-98b8-b8a74ede6e3d
	  Boot ID:                    7e62835b-3b3c-4293-85c7-fb0aa46e4d00
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-wpzp9             0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m59s
	  kube-system                 coredns-66bc5c9577-l8xqv             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     10m
	  kube-system                 coredns-66bc5c9577-wgcmx             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     10m
	  kube-system                 etcd-ha-148567                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         10m
	  kube-system                 kindnet-pv94f                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-apiserver-ha-148567             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-ha-148567    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-9n5cb                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-ha-148567             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-vip-ha-148567                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m42s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m41s                  kube-proxy       
	  Normal   Starting                 10m                    kube-proxy       
	  Normal   Starting                 11m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 11m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     11m (x8 over 11m)      kubelet          Node ha-148567 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)      kubelet          Node ha-148567 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)      kubelet          Node ha-148567 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m                    kubelet          Node ha-148567 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m                    kubelet          Node ha-148567 status is now: NodeHasSufficientPID
	  Normal   Starting                 10m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 10m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  10m                    kubelet          Node ha-148567 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           10m                    node-controller  Node ha-148567 event: Registered Node ha-148567 in Controller
	  Normal   NodeReady                10m                    kubelet          Node ha-148567 status is now: NodeReady
	  Normal   RegisteredNode           10m                    node-controller  Node ha-148567 event: Registered Node ha-148567 in Controller
	  Normal   RegisteredNode           9m26s                  node-controller  Node ha-148567 event: Registered Node ha-148567 in Controller
	  Normal   RegisteredNode           7m5s                   node-controller  Node ha-148567 event: Registered Node ha-148567 in Controller
	  Normal   NodeHasSufficientMemory  6m35s (x8 over 6m35s)  kubelet          Node ha-148567 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m35s (x8 over 6m35s)  kubelet          Node ha-148567 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m35s (x8 over 6m35s)  kubelet          Node ha-148567 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4m40s                  node-controller  Node ha-148567 event: Registered Node ha-148567 in Controller
	  Normal   RegisteredNode           4m36s                  node-controller  Node ha-148567 event: Registered Node ha-148567 in Controller
	  Normal   RegisteredNode           3m32s                  node-controller  Node ha-148567 event: Registered Node ha-148567 in Controller
	
	
	Name:               ha-148567-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-148567-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e7c291d147a7a4c759554efbd6d659a1a65fa869
	                    minikube.k8s.io/name=ha-148567
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_17T20_53_38_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 20:53:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-148567-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 21:03:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 21:02:05 +0000   Wed, 17 Dec 2025 20:56:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 21:02:05 +0000   Wed, 17 Dec 2025 20:56:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 21:02:05 +0000   Wed, 17 Dec 2025 20:56:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 21:02:05 +0000   Wed, 17 Dec 2025 20:56:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-148567-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 0dc957e113b26e583da13082693ddabc
	  System UUID:                f1741bb6-47c6-431c-9bdb-b61180c553d3
	  Boot ID:                    7e62835b-3b3c-4293-85c7-fb0aa46e4d00
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-d5rt7                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m59s
	  kube-system                 etcd-ha-148567-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         10m
	  kube-system                 kindnet-gwspj                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-apiserver-ha-148567-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-ha-148567-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-cbk47                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-ha-148567-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-vip-ha-148567-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 3m55s                  kube-proxy       
	  Normal   Starting                 10m                    kube-proxy       
	  Normal   CIDRAssignmentFailed     10m                    cidrAllocator    Node ha-148567-m02 status is now: CIDRAssignmentFailed
	  Normal   RegisteredNode           10m                    node-controller  Node ha-148567-m02 event: Registered Node ha-148567-m02 in Controller
	  Normal   RegisteredNode           10m                    node-controller  Node ha-148567-m02 event: Registered Node ha-148567-m02 in Controller
	  Normal   RegisteredNode           9m26s                  node-controller  Node ha-148567-m02 event: Registered Node ha-148567-m02 in Controller
	  Normal   NodeHasSufficientMemory  7m43s (x8 over 7m43s)  kubelet          Node ha-148567-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     7m43s (x8 over 7m43s)  kubelet          Node ha-148567-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    7m43s (x8 over 7m43s)  kubelet          Node ha-148567-m02 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 7m43s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 7m43s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeNotReady             7m12s                  node-controller  Node ha-148567-m02 status is now: NodeNotReady
	  Normal   RegisteredNode           7m5s                   node-controller  Node ha-148567-m02 event: Registered Node ha-148567-m02 in Controller
	  Normal   Starting                 6m32s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m32s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  6m31s (x8 over 6m32s)  kubelet          Node ha-148567-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m31s (x8 over 6m32s)  kubelet          Node ha-148567-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m31s (x8 over 6m32s)  kubelet          Node ha-148567-m02 status is now: NodeHasSufficientPID
	  Warning  ContainerGCFailed        5m32s                  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           4m40s                  node-controller  Node ha-148567-m02 event: Registered Node ha-148567-m02 in Controller
	  Normal   RegisteredNode           4m36s                  node-controller  Node ha-148567-m02 event: Registered Node ha-148567-m02 in Controller
	  Normal   RegisteredNode           3m32s                  node-controller  Node ha-148567-m02 event: Registered Node ha-148567-m02 in Controller
	
	
	Name:               ha-148567-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-148567-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e7c291d147a7a4c759554efbd6d659a1a65fa869
	                    minikube.k8s.io/name=ha-148567
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_17T20_54_30_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 20:54:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-148567-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 21:03:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 21:03:52 +0000   Wed, 17 Dec 2025 20:54:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 21:03:52 +0000   Wed, 17 Dec 2025 20:54:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 21:03:52 +0000   Wed, 17 Dec 2025 20:54:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 21:03:52 +0000   Wed, 17 Dec 2025 20:54:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-148567-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 0dc957e113b26e583da13082693ddabc
	  System UUID:                8000bbf5-0e22-446c-bb07-fb4fb4777e8a
	  Boot ID:                    7e62835b-3b3c-4293-85c7-fb0aa46e4d00
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-lc5vz                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m59s
	  kube-system                 etcd-ha-148567-m03                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         9m20s
	  kube-system                 kindnet-88zsz                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      9m25s
	  kube-system                 kube-apiserver-ha-148567-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 kube-controller-manager-ha-148567-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 kube-proxy-8nmpd                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m25s
	  kube-system                 kube-scheduler-ha-148567-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 kube-vip-ha-148567-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 9m19s                  kube-proxy       
	  Normal   Starting                 3m48s                  kube-proxy       
	  Normal   CIDRAssignmentFailed     9m25s                  cidrAllocator    Node ha-148567-m03 status is now: CIDRAssignmentFailed
	  Normal   CIDRAssignmentFailed     9m25s                  cidrAllocator    Node ha-148567-m03 status is now: CIDRAssignmentFailed
	  Normal   RegisteredNode           9m24s                  node-controller  Node ha-148567-m03 event: Registered Node ha-148567-m03 in Controller
	  Normal   RegisteredNode           9m23s                  node-controller  Node ha-148567-m03 event: Registered Node ha-148567-m03 in Controller
	  Normal   RegisteredNode           9m21s                  node-controller  Node ha-148567-m03 event: Registered Node ha-148567-m03 in Controller
	  Normal   RegisteredNode           7m5s                   node-controller  Node ha-148567-m03 event: Registered Node ha-148567-m03 in Controller
	  Normal   RegisteredNode           4m40s                  node-controller  Node ha-148567-m03 event: Registered Node ha-148567-m03 in Controller
	  Normal   NodeHasSufficientMemory  4m36s (x8 over 4m36s)  kubelet          Node ha-148567-m03 status is now: NodeHasSufficientMemory
	  Normal   Starting                 4m36s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 4m36s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    4m36s (x8 over 4m36s)  kubelet          Node ha-148567-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m36s (x8 over 4m36s)  kubelet          Node ha-148567-m03 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4m36s                  node-controller  Node ha-148567-m03 event: Registered Node ha-148567-m03 in Controller
	  Normal   RegisteredNode           3m32s                  node-controller  Node ha-148567-m03 event: Registered Node ha-148567-m03 in Controller
	
	
	Name:               ha-148567-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-148567-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e7c291d147a7a4c759554efbd6d659a1a65fa869
	                    minikube.k8s.io/name=ha-148567
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_17T20_55_18_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 20:55:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-148567-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 21:03:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 21:01:09 +0000   Wed, 17 Dec 2025 20:55:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 21:01:09 +0000   Wed, 17 Dec 2025 20:55:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 21:01:09 +0000   Wed, 17 Dec 2025 20:55:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 21:01:09 +0000   Wed, 17 Dec 2025 20:55:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-148567-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 0dc957e113b26e583da13082693ddabc
	  System UUID:                3417648c-3c2b-4e8d-9266-3d162fe27a2f
	  Boot ID:                    7e62835b-3b3c-4293-85c7-fb0aa46e4d00
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.4.0/24
	PodCIDRs:                     10.244.4.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-4xxcs       100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m37s
	  kube-system                 kube-proxy-9rv8b    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 3m46s                  kube-proxy       
	  Normal   Starting                 8m34s                  kube-proxy       
	  Normal   NodeHasNoDiskPressure    8m37s (x3 over 8m37s)  kubelet          Node ha-148567-m04 status is now: NodeHasNoDiskPressure
	  Normal   CIDRAssignmentFailed     8m37s                  cidrAllocator    Node ha-148567-m04 status is now: CIDRAssignmentFailed
	  Normal   NodeHasSufficientPID     8m37s (x3 over 8m37s)  kubelet          Node ha-148567-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  8m37s (x3 over 8m37s)  kubelet          Node ha-148567-m04 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           8m36s                  node-controller  Node ha-148567-m04 event: Registered Node ha-148567-m04 in Controller
	  Normal   RegisteredNode           8m34s                  node-controller  Node ha-148567-m04 event: Registered Node ha-148567-m04 in Controller
	  Normal   RegisteredNode           8m33s                  node-controller  Node ha-148567-m04 event: Registered Node ha-148567-m04 in Controller
	  Normal   NodeReady                8m22s                  kubelet          Node ha-148567-m04 status is now: NodeReady
	  Normal   RegisteredNode           7m5s                   node-controller  Node ha-148567-m04 event: Registered Node ha-148567-m04 in Controller
	  Normal   RegisteredNode           4m40s                  node-controller  Node ha-148567-m04 event: Registered Node ha-148567-m04 in Controller
	  Normal   RegisteredNode           4m36s                  node-controller  Node ha-148567-m04 event: Registered Node ha-148567-m04 in Controller
	  Normal   Starting                 4m10s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 4m10s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  4m6s (x8 over 4m10s)   kubelet          Node ha-148567-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m6s (x8 over 4m10s)   kubelet          Node ha-148567-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m6s (x8 over 4m10s)   kubelet          Node ha-148567-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           3m32s                  node-controller  Node ha-148567-m04 event: Registered Node ha-148567-m04 in Controller
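	
	Note: the per-node view above is plain kubectl describe nodes output; a hedged way to regenerate it and to pull only the warnings (context name assumed to match the minikube profile ha-148567):
	
	  kubectl --context ha-148567 describe nodes
	  kubectl --context ha-148567 get events -A --field-selector type=Warning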
	
	
	==> dmesg <==
	[Dec17 19:02] hrtimer: interrupt took 71042327 ns
	[Dec17 20:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec17 20:12] overlayfs: idmapped layers are currently not supported
	[  +0.080288] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec17 20:17] overlayfs: idmapped layers are currently not supported
	[Dec17 20:18] overlayfs: idmapped layers are currently not supported
	[Dec17 20:35] overlayfs: idmapped layers are currently not supported
	[Dec17 20:52] overlayfs: idmapped layers are currently not supported
	[Dec17 20:53] overlayfs: idmapped layers are currently not supported
	[Dec17 20:54] overlayfs: idmapped layers are currently not supported
	[Dec17 20:55] overlayfs: idmapped layers are currently not supported
	[Dec17 20:56] overlayfs: idmapped layers are currently not supported
	[Dec17 20:57] overlayfs: idmapped layers are currently not supported
	[  +3.685974] overlayfs: idmapped layers are currently not supported
	[Dec17 20:59] overlayfs: idmapped layers are currently not supported
	[Dec17 21:00] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [7b48eea7424a1e799bb5102aad672e4089e73d5c20382c2df99a7acabddf99d2] <==
	{"level":"info","ts":"2025-12-17T20:59:25.127907Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"1206557d2b7140f9","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2025-12-17T20:59:25.128046Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"1206557d2b7140f9"}
	{"level":"warn","ts":"2025-12-17T20:59:25.128099Z","caller":"rafthttp/stream.go:264","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"1206557d2b7140f9"}
	{"level":"info","ts":"2025-12-17T20:59:25.128133Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"1206557d2b7140f9"}
	{"level":"info","ts":"2025-12-17T20:59:25.149703Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"1206557d2b7140f9","stream-type":"stream Message"}
	{"level":"info","ts":"2025-12-17T20:59:25.149809Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"1206557d2b7140f9"}
	{"level":"info","ts":"2025-12-17T20:59:25.189707Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"1206557d2b7140f9"}
	{"level":"info","ts":"2025-12-17T20:59:25.198920Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"1206557d2b7140f9"}
	{"level":"warn","ts":"2025-12-17T20:59:26.185947Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"1206557d2b7140f9","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:59:26.187544Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"1206557d2b7140f9","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:59:26.228620Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"1206557d2b7140f9","error":"failed to dial 1206557d2b7140f9 on stream Message (EOF)"}
	{"level":"warn","ts":"2025-12-17T20:59:26.345948Z","caller":"rafthttp/stream.go:222","msg":"lost TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"1206557d2b7140f9"}
	{"level":"warn","ts":"2025-12-17T20:59:30.096031Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"1206557d2b7140f9","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-17T20:59:30.096622Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"1206557d2b7140f9","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-17T20:59:30.360781Z","caller":"rafthttp/stream.go:193","msg":"lost TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"1206557d2b7140f9"}
	{"level":"warn","ts":"2025-12-17T20:59:34.099810Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"1206557d2b7140f9","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-17T20:59:34.099948Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"1206557d2b7140f9","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"info","ts":"2025-12-17T20:59:37.788265Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"1206557d2b7140f9","stream-type":"stream Message"}
	{"level":"info","ts":"2025-12-17T20:59:37.788391Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"1206557d2b7140f9"}
	{"level":"info","ts":"2025-12-17T20:59:37.788431Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"1206557d2b7140f9"}
	{"level":"info","ts":"2025-12-17T20:59:37.796672Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"1206557d2b7140f9","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2025-12-17T20:59:37.796823Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"1206557d2b7140f9"}
	{"level":"info","ts":"2025-12-17T20:59:37.820571Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"1206557d2b7140f9"}
	{"level":"info","ts":"2025-12-17T20:59:37.821082Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"1206557d2b7140f9"}
	{"level":"info","ts":"2025-12-17T21:00:00.564875Z","caller":"traceutil/trace.go:172","msg":"trace[730122254] transaction","detail":"{read_only:false; response_revision:2088; number_of_response:1; }","duration":"130.097535ms","start":"2025-12-17T21:00:00.434758Z","end":"2025-12-17T21:00:00.564855Z","steps":["trace[730122254] 'process raft request'  (duration: 129.96804ms)"],"step_count":1}
	
	
	==> kernel <==
	 21:03:54 up  3:46,  0 user,  load average: 0.91, 1.31, 1.15
	Linux ha-148567 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [be62aea7ae9e318fdcb21f246614d04dfc3cac7d3871e814ca132ac4ea1af8ab] <==
	I1217 21:03:19.375499       1 main.go:324] Node ha-148567-m04 has CIDR [10.244.4.0/24] 
	I1217 21:03:29.375444       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 21:03:29.375484       1 main.go:301] handling current node
	I1217 21:03:29.375500       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1217 21:03:29.375507       1 main.go:324] Node ha-148567-m02 has CIDR [10.244.1.0/24] 
	I1217 21:03:29.375724       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1217 21:03:29.375738       1 main.go:324] Node ha-148567-m03 has CIDR [10.244.2.0/24] 
	I1217 21:03:29.375796       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1217 21:03:29.375809       1 main.go:324] Node ha-148567-m04 has CIDR [10.244.4.0/24] 
	I1217 21:03:39.375800       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 21:03:39.375836       1 main.go:301] handling current node
	I1217 21:03:39.375853       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1217 21:03:39.375859       1 main.go:324] Node ha-148567-m02 has CIDR [10.244.1.0/24] 
	I1217 21:03:39.376034       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1217 21:03:39.376047       1 main.go:324] Node ha-148567-m03 has CIDR [10.244.2.0/24] 
	I1217 21:03:39.376110       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1217 21:03:39.376120       1 main.go:324] Node ha-148567-m04 has CIDR [10.244.4.0/24] 
	I1217 21:03:49.375345       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 21:03:49.375380       1 main.go:301] handling current node
	I1217 21:03:49.375396       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1217 21:03:49.375403       1 main.go:324] Node ha-148567-m02 has CIDR [10.244.1.0/24] 
	I1217 21:03:49.375629       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1217 21:03:49.375644       1 main.go:324] Node ha-148567-m03 has CIDR [10.244.2.0/24] 
	I1217 21:03:49.375716       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1217 21:03:49.375729       1 main.go:324] Node ha-148567-m04 has CIDR [10.244.4.0/24] 
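	
	Note: kindnet is reconciling one route per peer pod CIDR (10.244.1.0/24 through 10.244.4.0/24); a hedged spot-check from the primary node:
	
	  minikube -p ha-148567 ssh -- ip route show | grep 10.244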
	
	
	==> kube-apiserver [0273f065d6acfc2f5b1353496b1c10bb1409bb5cd6154db0859cb71f3d44d9a6] <==
	{"level":"warn","ts":"2025-12-17T20:58:24.564283Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40021445a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-12-17T20:58:24.564298Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400113da40/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-12-17T20:58:24.564332Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40027cb4a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-12-17T20:58:24.564368Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001a52960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-12-17T20:58:24.564388Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40025d0d20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-12-17T20:58:24.564401Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40020f1860/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-12-17T20:58:24.564436Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40015d2780/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-12-17T20:58:24.564457Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002438780/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-12-17T20:58:24.564472Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002438000/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-12-17T20:58:24.564505Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4000b41c20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-12-17T20:58:24.564523Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001611c20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-12-17T20:58:24.564546Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400184a1e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-12-17T20:58:24.564563Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40025d03c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-12-17T20:58:24.564622Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002b6b860/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-12-17T20:58:24.564666Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002e82f00/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-12-17T20:58:26.320355Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4000b405a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
	{"level":"warn","ts":"2025-12-17T20:58:28.792696Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001eb3860/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
	E1217 20:58:28.792871       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded" logger="UnhandledError"
	E1217 20:58:28.792953       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E1217 20:58:28.794104       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E1217 20:58:28.794148       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E1217 20:58:28.795316       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="2.436444ms" method="GET" path="/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-148567" result=null
	F1217 20:58:29.196106       1 hooks.go:204] PostStartHook "start-service-ip-repair-controllers" failed: unable to perform initial IP and Port allocation check
	{"level":"warn","ts":"2025-12-17T20:58:29.337612Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4000b405a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
	{"level":"warn","ts":"2025-12-17T20:58:29.338203Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40020f05a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
	
	
	==> kube-apiserver [58f3a197004f5d62632cc80af9bd747bbb630d2255db985a002dcb290b8fec26] <==
	I1217 20:59:07.961776       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1217 20:59:07.961822       1 aggregator.go:171] initial CRD sync complete...
	I1217 20:59:07.961836       1 autoregister_controller.go:144] Starting autoregister controller
	I1217 20:59:07.961841       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1217 20:59:07.961854       1 cache.go:39] Caches are synced for autoregister controller
	I1217 20:59:07.968198       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1217 20:59:07.968812       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1217 20:59:07.968843       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1217 20:59:07.968940       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1217 20:59:07.972528       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1217 20:59:07.972554       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1217 20:59:07.981440       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1217 20:59:08.000665       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	W1217 20:59:08.036152       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.3]
	I1217 20:59:08.037731       1 controller.go:667] quota admission added evaluator for: endpoints
	I1217 20:59:08.072711       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E1217 20:59:08.083854       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I1217 20:59:08.480717       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1217 20:59:09.161299       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1217 20:59:09.161407       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1217 20:59:10.866944       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	W1217 20:59:16.897166       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3]
	I1217 20:59:59.441759       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1217 20:59:59.499513       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1217 21:00:04.809392       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [494e8522562ca32d388131f40ce187010035d61cbc5d6ce5a865333dd850d94e] <==
	I1217 20:59:18.993226       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1217 20:59:18.999860       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1217 20:59:18.999948       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1217 20:59:19.004874       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1217 20:59:19.016373       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1217 20:59:19.016530       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1217 20:59:19.016388       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1217 20:59:19.016408       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1217 20:59:19.017528       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1217 20:59:19.017569       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1217 20:59:19.017632       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1217 20:59:19.017945       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1217 20:59:19.021092       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1217 20:59:19.025507       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1217 20:59:19.045724       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1217 20:59:19.045734       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1217 20:59:19.049978       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1217 20:59:19.050106       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1217 20:59:19.050216       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-148567-m04"
	I1217 20:59:19.053320       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1217 20:59:19.054927       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1217 20:59:19.059201       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1217 20:59:19.066898       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1217 20:59:19.066922       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1217 20:59:19.066930       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [bf8c2f6823453d081c96845039a6901183326d12bd63d0143e1c748f8411177a] <==
	I1217 20:58:23.691524       1 serving.go:386] Generated self-signed cert in-memory
	I1217 20:58:24.303529       1 controllermanager.go:191] "Starting" version="v1.34.3"
	I1217 20:58:24.303556       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 20:58:24.305033       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1217 20:58:24.305183       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1217 20:58:24.305513       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1217 20:58:24.305602       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1217 20:58:36.324375       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: forbidden: User \"system:kube-controller-manager\" cannot get path \"/healthz\""
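	
	Note: this controller-manager instance timed out waiting for apiserver /healthz; since it serves on 127.0.0.1:10257 (see above) and /healthz is on the authorization always-allow list by default, a hedged direct probe from the node (assuming curl is present in the node image) is:
	
	  minikube -p ha-148567 ssh -- curl -sk https://127.0.0.1:10257/healthz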
	
	
	==> kube-proxy [9c2f443274791cbb739fa32684040efe768b281d3b40f0fdfa1ff15237e0485c] <==
	I1217 20:59:12.356083       1 server_linux.go:53] "Using iptables proxy"
	I1217 20:59:12.461390       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1217 20:59:12.562037       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1217 20:59:12.562172       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1217 20:59:12.562308       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 20:59:12.588852       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 20:59:12.588970       1 server_linux.go:132] "Using iptables Proxier"
	I1217 20:59:12.592749       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 20:59:12.593339       1 server.go:527] "Version info" version="v1.34.3"
	I1217 20:59:12.593422       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 20:59:12.597235       1 config.go:200] "Starting service config controller"
	I1217 20:59:12.597257       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 20:59:12.597270       1 config.go:106] "Starting endpoint slice config controller"
	I1217 20:59:12.597274       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 20:59:12.597299       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 20:59:12.597303       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 20:59:12.598036       1 config.go:309] "Starting node config controller"
	I1217 20:59:12.598091       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 20:59:12.598121       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 20:59:12.698129       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1217 20:59:12.698169       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1217 20:59:12.698129       1 shared_informer.go:356] "Caches are synced" controller="service config"
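	
	Note: kube-proxy is running in iptables mode here, so the service rules it synced can be spot-checked with a hedged command such as:
	
	  minikube -p ha-148567 ssh -- sudo iptables -t nat -L KUBE-SERVICES -n | head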
	
	
	==> kube-scheduler [055c04d40b9a0b3de2fc113e6e93106a29a67f711d7609c5bdc735d261688c9e] <==
	E1217 20:58:46.135841       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1217 20:58:46.238975       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1217 20:58:46.917370       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1217 20:58:47.513192       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1217 20:58:47.994962       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1217 20:58:48.630524       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1217 20:58:48.664886       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1217 20:58:48.798812       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1217 20:58:48.912518       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1217 20:58:49.980501       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1217 20:58:50.118478       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1217 20:58:50.661797       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1217 20:58:51.738458       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1217 20:58:51.833081       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1217 20:58:51.880446       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1217 20:58:52.278361       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1217 20:58:52.705253       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1217 20:58:54.937084       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1217 20:59:01.652210       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1217 20:59:02.395235       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1217 20:59:04.543403       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1217 20:59:04.592198       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1217 20:59:04.661603       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1217 20:59:06.231393       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1217 20:59:06.897693       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
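	
	Note: the "forbidden" list errors are the scheduler racing the restarted apiserver before RBAC was being served again; a hedged post-recovery check of its permissions:
	
	  kubectl auth can-i list pods --as=system:kube-scheduler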
	
	
	==> kubelet <==
	Dec 17 20:59:08 ha-148567 kubelet[848]: E1217 20:59:08.852661     848 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
	Dec 17 20:59:08 ha-148567 kubelet[848]: E1217 20:59:08.852843     848 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/775371d6-bc7d-40f6-8e0f-655f265828ba-kube-proxy podName:775371d6-bc7d-40f6-8e0f-655f265828ba nodeName:}" failed. No retries permitted until 2025-12-17 20:59:09.352821731 +0000 UTC m=+110.333368718 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/775371d6-bc7d-40f6-8e0f-655f265828ba-kube-proxy") pod "kube-proxy-9n5cb" (UID: "775371d6-bc7d-40f6-8e0f-655f265828ba") : failed to sync configmap cache: timed out waiting for the condition
	Dec 17 20:59:08 ha-148567 kubelet[848]: E1217 20:59:08.855111     848 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
	Dec 17 20:59:08 ha-148567 kubelet[848]: E1217 20:59:08.855194     848 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4fbbda83-a7d2-41c0-98ea-066d493cd483-config-volume podName:4fbbda83-a7d2-41c0-98ea-066d493cd483 nodeName:}" failed. No retries permitted until 2025-12-17 20:59:09.355175868 +0000 UTC m=+110.335722855 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/4fbbda83-a7d2-41c0-98ea-066d493cd483-config-volume") pod "coredns-66bc5c9577-wgcmx" (UID: "4fbbda83-a7d2-41c0-98ea-066d493cd483") : failed to sync configmap cache: timed out waiting for the condition
	Dec 17 20:59:08 ha-148567 kubelet[848]: W1217 20:59:08.873868     848 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/88230c4afd3a17f60bc70f3b880924c24773c5848052be68927148210c5cca08/crio-889abb571076ec1220928e45789d843337a05ee99ef9673a28c2a3c540b7021c WatchSource:0}: Error finding container 889abb571076ec1220928e45789d843337a05ee99ef9673a28c2a3c540b7021c: Status 404 returned error can't find the container with id 889abb571076ec1220928e45789d843337a05ee99ef9673a28c2a3c540b7021c
	Dec 17 20:59:09 ha-148567 kubelet[848]: W1217 20:59:09.068519     848 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/88230c4afd3a17f60bc70f3b880924c24773c5848052be68927148210c5cca08/crio-48e89d3a90ce3d775524a935bca2163af9eb88b51ddefe58bdd80f6e131fc019 WatchSource:0}: Error finding container 48e89d3a90ce3d775524a935bca2163af9eb88b51ddefe58bdd80f6e131fc019: Status 404 returned error can't find the container with id 48e89d3a90ce3d775524a935bca2163af9eb88b51ddefe58bdd80f6e131fc019
	Dec 17 20:59:10 ha-148567 kubelet[848]: E1217 20:59:10.374815     848 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
	Dec 17 20:59:10 ha-148567 kubelet[848]: E1217 20:59:10.375407     848 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/775371d6-bc7d-40f6-8e0f-655f265828ba-kube-proxy podName:775371d6-bc7d-40f6-8e0f-655f265828ba nodeName:}" failed. No retries permitted until 2025-12-17 20:59:11.375385528 +0000 UTC m=+112.355932507 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/775371d6-bc7d-40f6-8e0f-655f265828ba-kube-proxy") pod "kube-proxy-9n5cb" (UID: "775371d6-bc7d-40f6-8e0f-655f265828ba") : failed to sync configmap cache: timed out waiting for the condition
	Dec 17 20:59:10 ha-148567 kubelet[848]: E1217 20:59:10.375315     848 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
	Dec 17 20:59:10 ha-148567 kubelet[848]: E1217 20:59:10.375700     848 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e3a7d90f-a4fd-4393-a20a-d49f8a8aa0db-config-volume podName:e3a7d90f-a4fd-4393-a20a-d49f8a8aa0db nodeName:}" failed. No retries permitted until 2025-12-17 20:59:11.375685929 +0000 UTC m=+112.356232908 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e3a7d90f-a4fd-4393-a20a-d49f8a8aa0db-config-volume") pod "coredns-66bc5c9577-l8xqv" (UID: "e3a7d90f-a4fd-4393-a20a-d49f8a8aa0db") : failed to sync configmap cache: timed out waiting for the condition
	Dec 17 20:59:10 ha-148567 kubelet[848]: E1217 20:59:10.375332     848 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
	Dec 17 20:59:10 ha-148567 kubelet[848]: E1217 20:59:10.375880     848 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4fbbda83-a7d2-41c0-98ea-066d493cd483-config-volume podName:4fbbda83-a7d2-41c0-98ea-066d493cd483 nodeName:}" failed. No retries permitted until 2025-12-17 20:59:11.375869357 +0000 UTC m=+112.356416344 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/4fbbda83-a7d2-41c0-98ea-066d493cd483-config-volume") pod "coredns-66bc5c9577-wgcmx" (UID: "4fbbda83-a7d2-41c0-98ea-066d493cd483") : failed to sync configmap cache: timed out waiting for the condition
	Dec 17 20:59:12 ha-148567 kubelet[848]: I1217 20:59:12.229115     848 kubelet.go:3203] "Trying to delete pod" pod="kube-system/kube-vip-ha-148567" podUID="21e88703-e2ca-4f7a-b29b-995460537681"
	Dec 17 20:59:12 ha-148567 kubelet[848]: I1217 20:59:12.264656     848 kubelet.go:3209] "Deleted mirror pod as it didn't match the static Pod" pod="kube-system/kube-vip-ha-148567"
	Dec 17 20:59:12 ha-148567 kubelet[848]: I1217 20:59:12.264831     848 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-vip-ha-148567"
	Dec 17 20:59:12 ha-148567 kubelet[848]: E1217 20:59:12.388846     848 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
	Dec 17 20:59:12 ha-148567 kubelet[848]: E1217 20:59:12.389130     848 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4fbbda83-a7d2-41c0-98ea-066d493cd483-config-volume podName:4fbbda83-a7d2-41c0-98ea-066d493cd483 nodeName:}" failed. No retries permitted until 2025-12-17 20:59:14.389106831 +0000 UTC m=+115.369653810 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/4fbbda83-a7d2-41c0-98ea-066d493cd483-config-volume") pod "coredns-66bc5c9577-wgcmx" (UID: "4fbbda83-a7d2-41c0-98ea-066d493cd483") : failed to sync configmap cache: timed out waiting for the condition
	Dec 17 20:59:12 ha-148567 kubelet[848]: E1217 20:59:12.389030     848 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
	Dec 17 20:59:12 ha-148567 kubelet[848]: E1217 20:59:12.389728     848 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e3a7d90f-a4fd-4393-a20a-d49f8a8aa0db-config-volume podName:e3a7d90f-a4fd-4393-a20a-d49f8a8aa0db nodeName:}" failed. No retries permitted until 2025-12-17 20:59:14.389710367 +0000 UTC m=+115.370257354 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e3a7d90f-a4fd-4393-a20a-d49f8a8aa0db-config-volume") pod "coredns-66bc5c9577-l8xqv" (UID: "e3a7d90f-a4fd-4393-a20a-d49f8a8aa0db") : failed to sync configmap cache: timed out waiting for the condition
	Dec 17 20:59:13 ha-148567 kubelet[848]: I1217 20:59:13.384524     848 kubelet.go:3203] "Trying to delete pod" pod="kube-system/kube-vip-ha-148567" podUID="21e88703-e2ca-4f7a-b29b-995460537681"
	Dec 17 20:59:14 ha-148567 kubelet[848]: W1217 20:59:14.589657     848 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/88230c4afd3a17f60bc70f3b880924c24773c5848052be68927148210c5cca08/crio-bb9c16be2a1aedd858d979ae15e9146ad064db8d985a6ceb59a72082bfd3a89a WatchSource:0}: Error finding container bb9c16be2a1aedd858d979ae15e9146ad064db8d985a6ceb59a72082bfd3a89a: Status 404 returned error can't find the container with id bb9c16be2a1aedd858d979ae15e9146ad064db8d985a6ceb59a72082bfd3a89a
	Dec 17 20:59:14 ha-148567 kubelet[848]: W1217 20:59:14.594356     848 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/88230c4afd3a17f60bc70f3b880924c24773c5848052be68927148210c5cca08/crio-929945e3d1a3e1046f86b0d613851b79f1be5ddf4950f580b22b0757f6bb7e06 WatchSource:0}: Error finding container 929945e3d1a3e1046f86b0d613851b79f1be5ddf4950f580b22b0757f6bb7e06: Status 404 returned error can't find the container with id 929945e3d1a3e1046f86b0d613851b79f1be5ddf4950f580b22b0757f6bb7e06
	Dec 17 20:59:15 ha-148567 kubelet[848]: I1217 20:59:15.598263     848 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-148567" podStartSLOduration=3.598244931 podStartE2EDuration="3.598244931s" podCreationTimestamp="2025-12-17 20:59:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 20:59:15.579683206 +0000 UTC m=+116.560230185" watchObservedRunningTime="2025-12-17 20:59:15.598244931 +0000 UTC m=+116.578791910"
	Dec 17 20:59:19 ha-148567 kubelet[848]: E1217 20:59:19.254429     848 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/53294b9279ebf263ed0cda5812f1ad589db804f07fe163bc838196d6b45a0fcc/diff" to get inode usage: stat /var/lib/containers/storage/overlay/53294b9279ebf263ed0cda5812f1ad589db804f07fe163bc838196d6b45a0fcc/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/kube-system_kube-controller-manager-ha-148567_275f9236d45449f9c15b78cd0e1552cb/kube-controller-manager/2.log" to get inode usage: stat /var/log/pods/kube-system_kube-controller-manager-ha-148567_275f9236d45449f9c15b78cd0e1552cb/kube-controller-manager/2.log: no such file or directory
	Dec 17 20:59:39 ha-148567 kubelet[848]: I1217 20:59:39.626838     848 scope.go:117] "RemoveContainer" containerID="36cc5a99d5800e41730be4a25115863b86a6455bd50f1d620bffa86d7a25ea3d"
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-148567 -n ha-148567
helpers_test.go:270: (dbg) Run:  kubectl --context ha-148567 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (432.10s)
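The kubelet entries above retry the failed configmap mounts with a doubling delay: durationBeforeRetry goes 500ms, then 1s, then 2s. A minimal Go sketch of that backoff shape, assuming a hypothetical syncConfigMap stand-in for the real mount operation (the cap of four attempts is also an assumption, not taken from the log):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // syncConfigMap is a hypothetical stand-in for the kubelet's configmap
    // cache sync; here it always fails the way the log above does.
    func syncConfigMap() error {
        return errors.New("timed out waiting for the condition")
    }

    func main() {
        delay := 500 * time.Millisecond
        for attempt := 1; attempt <= 4; attempt++ {
            if err := syncConfigMap(); err == nil {
                return
            }
            fmt.Printf("attempt %d failed; no retries permitted for %v\n", attempt, delay)
            time.Sleep(delay)
            delay *= 2 // 500ms -> 1s -> 2s, matching the durationBeforeRetry values above
        }
    }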

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (3.53s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:415: expected profile "ha-148567" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-148567\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-148567\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.3\",\"ClusterName\":\"ha-148567\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.49.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.3\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.49.3\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.3\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.49.5\",\"Port\":0,\"KubernetesVersion\":\"v1.34.3\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubetail\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-arm64 profile list --output json"
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect ha-148567
helpers_test.go:244: (dbg) docker inspect ha-148567:

-- stdout --
	[
	    {
	        "Id": "88230c4afd3a17f60bc70f3b880924c24773c5848052be68927148210c5cca08",
	        "Created": "2025-12-17T20:52:31.092462673Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 568318,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T20:57:11.629499973Z",
	            "FinishedAt": "2025-12-17T20:57:11.031529194Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/88230c4afd3a17f60bc70f3b880924c24773c5848052be68927148210c5cca08/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/88230c4afd3a17f60bc70f3b880924c24773c5848052be68927148210c5cca08/hostname",
	        "HostsPath": "/var/lib/docker/containers/88230c4afd3a17f60bc70f3b880924c24773c5848052be68927148210c5cca08/hosts",
	        "LogPath": "/var/lib/docker/containers/88230c4afd3a17f60bc70f3b880924c24773c5848052be68927148210c5cca08/88230c4afd3a17f60bc70f3b880924c24773c5848052be68927148210c5cca08-json.log",
	        "Name": "/ha-148567",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-148567:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-148567",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "88230c4afd3a17f60bc70f3b880924c24773c5848052be68927148210c5cca08",
	                "LowerDir": "/var/lib/docker/overlay2/8e673472741feb79114d21cdb1133ef1554ec9c1de94e1353239170ab1e99ffe-init/diff:/var/lib/docker/overlay2/c4759b344ecb109c83d66e4ef56c76903f1d1e597efb86ab2d2c7911b1130a8f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8e673472741feb79114d21cdb1133ef1554ec9c1de94e1353239170ab1e99ffe/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8e673472741feb79114d21cdb1133ef1554ec9c1de94e1353239170ab1e99ffe/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8e673472741feb79114d21cdb1133ef1554ec9c1de94e1353239170ab1e99ffe/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-148567",
	                "Source": "/var/lib/docker/volumes/ha-148567/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-148567",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-148567",
	                "name.minikube.sigs.k8s.io": "ha-148567",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8bec05c72f8070026c65a234cc2234c9ed60a9d48a73ed7980f988d165d7313b",
	            "SandboxKey": "/var/run/docker/netns/8bec05c72f80",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33208"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33209"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33212"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33210"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33211"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-148567": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ea:1c:9e:71:58:20",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "254979ff9069c22f3a569b8e9b07ed4381f262395f3bf61c458fcf6159449939",
	                    "EndpointID": "557d101a90f45ff33539072d9ea1e4592c6793c9d7ee55f08be852661aa35e13",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-148567",
	                        "88230c4afd3a"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
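The NetworkSettings.Ports block above is what the provisioning log further down reads to find the SSH endpoint; the Go-template inspect form it uses (quoted verbatim from the cli_runner lines below) can be run by hand:

    docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567

Against the mapping shown above this yields 33208, the 127.0.0.1 port the SSH client dials at 20:57:11.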
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-148567 -n ha-148567
helpers_test.go:253: <<< TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p ha-148567 logs -n 25: (1.545433293s)
helpers_test.go:261: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-148567 ssh -n ha-148567-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-148567 │ jenkins │ v1.37.0 │ 17 Dec 25 20:55 UTC │ 17 Dec 25 20:55 UTC │
	│ ssh     │ ha-148567 ssh -n ha-148567-m02 sudo cat /home/docker/cp-test_ha-148567-m03_ha-148567-m02.txt                                         │ ha-148567 │ jenkins │ v1.37.0 │ 17 Dec 25 20:55 UTC │ 17 Dec 25 20:55 UTC │
	│ cp      │ ha-148567 cp ha-148567-m03:/home/docker/cp-test.txt ha-148567-m04:/home/docker/cp-test_ha-148567-m03_ha-148567-m04.txt               │ ha-148567 │ jenkins │ v1.37.0 │ 17 Dec 25 20:55 UTC │ 17 Dec 25 20:55 UTC │
	│ ssh     │ ha-148567 ssh -n ha-148567-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-148567 │ jenkins │ v1.37.0 │ 17 Dec 25 20:55 UTC │ 17 Dec 25 20:55 UTC │
	│ ssh     │ ha-148567 ssh -n ha-148567-m04 sudo cat /home/docker/cp-test_ha-148567-m03_ha-148567-m04.txt                                         │ ha-148567 │ jenkins │ v1.37.0 │ 17 Dec 25 20:55 UTC │ 17 Dec 25 20:55 UTC │
	│ cp      │ ha-148567 cp testdata/cp-test.txt ha-148567-m04:/home/docker/cp-test.txt                                                             │ ha-148567 │ jenkins │ v1.37.0 │ 17 Dec 25 20:55 UTC │ 17 Dec 25 20:55 UTC │
	│ ssh     │ ha-148567 ssh -n ha-148567-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-148567 │ jenkins │ v1.37.0 │ 17 Dec 25 20:55 UTC │ 17 Dec 25 20:55 UTC │
	│ cp      │ ha-148567 cp ha-148567-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3374009435/001/cp-test_ha-148567-m04.txt │ ha-148567 │ jenkins │ v1.37.0 │ 17 Dec 25 20:55 UTC │ 17 Dec 25 20:55 UTC │
	│ ssh     │ ha-148567 ssh -n ha-148567-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-148567 │ jenkins │ v1.37.0 │ 17 Dec 25 20:55 UTC │ 17 Dec 25 20:55 UTC │
	│ cp      │ ha-148567 cp ha-148567-m04:/home/docker/cp-test.txt ha-148567:/home/docker/cp-test_ha-148567-m04_ha-148567.txt                       │ ha-148567 │ jenkins │ v1.37.0 │ 17 Dec 25 20:55 UTC │ 17 Dec 25 20:55 UTC │
	│ ssh     │ ha-148567 ssh -n ha-148567-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-148567 │ jenkins │ v1.37.0 │ 17 Dec 25 20:55 UTC │ 17 Dec 25 20:55 UTC │
	│ ssh     │ ha-148567 ssh -n ha-148567 sudo cat /home/docker/cp-test_ha-148567-m04_ha-148567.txt                                                 │ ha-148567 │ jenkins │ v1.37.0 │ 17 Dec 25 20:55 UTC │ 17 Dec 25 20:55 UTC │
	│ cp      │ ha-148567 cp ha-148567-m04:/home/docker/cp-test.txt ha-148567-m02:/home/docker/cp-test_ha-148567-m04_ha-148567-m02.txt               │ ha-148567 │ jenkins │ v1.37.0 │ 17 Dec 25 20:55 UTC │ 17 Dec 25 20:55 UTC │
	│ ssh     │ ha-148567 ssh -n ha-148567-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-148567 │ jenkins │ v1.37.0 │ 17 Dec 25 20:55 UTC │ 17 Dec 25 20:55 UTC │
	│ ssh     │ ha-148567 ssh -n ha-148567-m02 sudo cat /home/docker/cp-test_ha-148567-m04_ha-148567-m02.txt                                         │ ha-148567 │ jenkins │ v1.37.0 │ 17 Dec 25 20:55 UTC │ 17 Dec 25 20:55 UTC │
	│ cp      │ ha-148567 cp ha-148567-m04:/home/docker/cp-test.txt ha-148567-m03:/home/docker/cp-test_ha-148567-m04_ha-148567-m03.txt               │ ha-148567 │ jenkins │ v1.37.0 │ 17 Dec 25 20:55 UTC │ 17 Dec 25 20:55 UTC │
	│ ssh     │ ha-148567 ssh -n ha-148567-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-148567 │ jenkins │ v1.37.0 │ 17 Dec 25 20:55 UTC │ 17 Dec 25 20:55 UTC │
	│ ssh     │ ha-148567 ssh -n ha-148567-m03 sudo cat /home/docker/cp-test_ha-148567-m04_ha-148567-m03.txt                                         │ ha-148567 │ jenkins │ v1.37.0 │ 17 Dec 25 20:55 UTC │ 17 Dec 25 20:55 UTC │
	│ node    │ ha-148567 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-148567 │ jenkins │ v1.37.0 │ 17 Dec 25 20:55 UTC │ 17 Dec 25 20:56 UTC │
	│ node    │ ha-148567 node start m02 --alsologtostderr -v 5                                                                                      │ ha-148567 │ jenkins │ v1.37.0 │ 17 Dec 25 20:56 UTC │ 17 Dec 25 20:56 UTC │
	│ node    │ ha-148567 node list --alsologtostderr -v 5                                                                                           │ ha-148567 │ jenkins │ v1.37.0 │ 17 Dec 25 20:56 UTC │                     │
	│ stop    │ ha-148567 stop --alsologtostderr -v 5                                                                                                │ ha-148567 │ jenkins │ v1.37.0 │ 17 Dec 25 20:56 UTC │ 17 Dec 25 20:57 UTC │
	│ start   │ ha-148567 start --wait true --alsologtostderr -v 5                                                                                   │ ha-148567 │ jenkins │ v1.37.0 │ 17 Dec 25 20:57 UTC │                     │
	│ node    │ ha-148567 node list --alsologtostderr -v 5                                                                                           │ ha-148567 │ jenkins │ v1.37.0 │ 17 Dec 25 21:03 UTC │                     │
	│ node    │ ha-148567 node delete m03 --alsologtostderr -v 5                                                                                     │ ha-148567 │ jenkins │ v1.37.0 │ 17 Dec 25 21:03 UTC │ 17 Dec 25 21:04 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 20:57:11
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 20:57:11.358859  568189 out.go:360] Setting OutFile to fd 1 ...
	I1217 20:57:11.359079  568189 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:57:11.359113  568189 out.go:374] Setting ErrFile to fd 2...
	I1217 20:57:11.359134  568189 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:57:11.359399  568189 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-485134/.minikube/bin
	I1217 20:57:11.359857  568189 out.go:368] Setting JSON to false
	I1217 20:57:11.360732  568189 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":13181,"bootTime":1765991851,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1217 20:57:11.360834  568189 start.go:143] virtualization:  
	I1217 20:57:11.366162  568189 out.go:179] * [ha-148567] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 20:57:11.369165  568189 out.go:179]   - MINIKUBE_LOCATION=21808
	I1217 20:57:11.369340  568189 notify.go:221] Checking for updates...
	I1217 20:57:11.372773  568189 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 20:57:11.376038  568189 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-485134/kubeconfig
	I1217 20:57:11.378993  568189 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-485134/.minikube
	I1217 20:57:11.381848  568189 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 20:57:11.384979  568189 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 20:57:11.388367  568189 config.go:182] Loaded profile config "ha-148567": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:57:11.388514  568189 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 20:57:11.413210  568189 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 20:57:11.413329  568189 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 20:57:11.470988  568189 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-17 20:57:11.461612355 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 20:57:11.471099  568189 docker.go:319] overlay module found
	I1217 20:57:11.474237  568189 out.go:179] * Using the docker driver based on existing profile
	I1217 20:57:11.477144  568189 start.go:309] selected driver: docker
	I1217 20:57:11.477166  568189 start.go:927] validating driver "docker" against &{Name:ha-148567 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:ha-148567 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:57:11.477308  568189 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 20:57:11.477418  568189 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 20:57:11.541431  568189 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-17 20:57:11.532691865 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 20:57:11.541848  568189 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 20:57:11.541879  568189 cni.go:84] Creating CNI manager for ""
	I1217 20:57:11.541937  568189 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1217 20:57:11.541988  568189 start.go:353] cluster config:
	{Name:ha-148567 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:ha-148567 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:57:11.546865  568189 out.go:179] * Starting "ha-148567" primary control-plane node in "ha-148567" cluster
	I1217 20:57:11.549690  568189 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 20:57:11.552597  568189 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 20:57:11.555352  568189 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 20:57:11.555402  568189 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21808-485134/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4
	I1217 20:57:11.555416  568189 cache.go:65] Caching tarball of preloaded images
	I1217 20:57:11.555437  568189 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 20:57:11.555506  568189 preload.go:238] Found /home/jenkins/minikube-integration/21808-485134/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1217 20:57:11.555517  568189 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1217 20:57:11.555734  568189 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/config.json ...
	I1217 20:57:11.574595  568189 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 20:57:11.574619  568189 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 20:57:11.574640  568189 cache.go:243] Successfully downloaded all kic artifacts
	I1217 20:57:11.574675  568189 start.go:360] acquireMachinesLock for ha-148567: {Name:mkeea083db7bee665ba841ae2b673f302d3ac8a8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 20:57:11.574737  568189 start.go:364] duration metric: took 37.949µs to acquireMachinesLock for "ha-148567"
	I1217 20:57:11.574761  568189 start.go:96] Skipping create...Using existing machine configuration
	I1217 20:57:11.574767  568189 fix.go:54] fixHost starting: 
	I1217 20:57:11.575046  568189 cli_runner.go:164] Run: docker container inspect ha-148567 --format={{.State.Status}}
	I1217 20:57:11.592879  568189 fix.go:112] recreateIfNeeded on ha-148567: state=Stopped err=<nil>
	W1217 20:57:11.592909  568189 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 20:57:11.596175  568189 out.go:252] * Restarting existing docker container for "ha-148567" ...
	I1217 20:57:11.596256  568189 cli_runner.go:164] Run: docker start ha-148567
	I1217 20:57:11.847065  568189 cli_runner.go:164] Run: docker container inspect ha-148567 --format={{.State.Status}}
	I1217 20:57:11.870185  568189 kic.go:430] container "ha-148567" state is running.
	I1217 20:57:11.870824  568189 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-148567
	I1217 20:57:11.897361  568189 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/config.json ...
	I1217 20:57:11.897594  568189 machine.go:94] provisionDockerMachine start ...
	I1217 20:57:11.897659  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567
	I1217 20:57:11.920598  568189 main.go:143] libmachine: Using SSH client type: native
	I1217 20:57:11.920937  568189 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33208 <nil> <nil>}
	I1217 20:57:11.920945  568189 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 20:57:11.923893  568189 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47488->127.0.0.1:33208: read: connection reset by peer
	I1217 20:57:15.067633  568189 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-148567
	
	I1217 20:57:15.067656  568189 ubuntu.go:182] provisioning hostname "ha-148567"
	I1217 20:57:15.067737  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567
	I1217 20:57:15.086692  568189 main.go:143] libmachine: Using SSH client type: native
	I1217 20:57:15.087056  568189 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33208 <nil> <nil>}
	I1217 20:57:15.087069  568189 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-148567 && echo "ha-148567" | sudo tee /etc/hostname
	I1217 20:57:15.229459  568189 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-148567
	
	I1217 20:57:15.229547  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567
	I1217 20:57:15.248113  568189 main.go:143] libmachine: Using SSH client type: native
	I1217 20:57:15.248429  568189 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33208 <nil> <nil>}
	I1217 20:57:15.248448  568189 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-148567' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-148567/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-148567' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 20:57:15.380233  568189 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 20:57:15.380256  568189 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21808-485134/.minikube CaCertPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21808-485134/.minikube}
	I1217 20:57:15.380318  568189 ubuntu.go:190] setting up certificates
	I1217 20:57:15.380340  568189 provision.go:84] configureAuth start
	I1217 20:57:15.380427  568189 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-148567
	I1217 20:57:15.398346  568189 provision.go:143] copyHostCerts
	I1217 20:57:15.398396  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem
	I1217 20:57:15.398436  568189 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem, removing ...
	I1217 20:57:15.398443  568189 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem
	I1217 20:57:15.398519  568189 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem (1675 bytes)
	I1217 20:57:15.398610  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem
	I1217 20:57:15.398628  568189 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem, removing ...
	I1217 20:57:15.398632  568189 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem
	I1217 20:57:15.398658  568189 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem (1082 bytes)
	I1217 20:57:15.398706  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem
	I1217 20:57:15.398722  568189 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem, removing ...
	I1217 20:57:15.398725  568189 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem
	I1217 20:57:15.398748  568189 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem (1123 bytes)
	I1217 20:57:15.398801  568189 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem org=jenkins.ha-148567 san=[127.0.0.1 192.168.49.2 ha-148567 localhost minikube]
	I1217 20:57:16.169383  568189 provision.go:177] copyRemoteCerts
	I1217 20:57:16.169461  568189 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 20:57:16.169502  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567
	I1217 20:57:16.187039  568189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33208 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/ha-148567/id_rsa Username:docker}
	I1217 20:57:16.287499  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1217 20:57:16.287563  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 20:57:16.305548  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1217 20:57:16.305623  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1217 20:57:16.324256  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1217 20:57:16.324318  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 20:57:16.342494  568189 provision.go:87] duration metric: took 962.127276ms to configureAuth
	I1217 20:57:16.342522  568189 ubuntu.go:206] setting minikube options for container-runtime
	I1217 20:57:16.342771  568189 config.go:182] Loaded profile config "ha-148567": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:57:16.342894  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567
	I1217 20:57:16.360548  568189 main.go:143] libmachine: Using SSH client type: native
	I1217 20:57:16.360872  568189 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33208 <nil> <nil>}
	I1217 20:57:16.360886  568189 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 20:57:16.731877  568189 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 20:57:16.731907  568189 machine.go:97] duration metric: took 4.834303602s to provisionDockerMachine
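The drop-in written just above sets CRIO_MINIKUBE_OPTIONS so that the whole service CIDR (10.96.0.0/12) is treated as an insecure registry range, presumably consumed by the crio systemd unit via an environment file, and then restarts CRI-O. If this step ever fails, the file and the unit state can be inspected directly (a sketch, reusing this run's profile):

  out/minikube-linux-arm64 -p ha-148567 ssh -- cat /etc/sysconfig/crio.minikube
  out/minikube-linux-arm64 -p ha-148567 ssh -- sudo systemctl status crio --no-pager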
	I1217 20:57:16.731920  568189 start.go:293] postStartSetup for "ha-148567" (driver="docker")
	I1217 20:57:16.731930  568189 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 20:57:16.732002  568189 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 20:57:16.732081  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567
	I1217 20:57:16.754210  568189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33208 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/ha-148567/id_rsa Username:docker}
	I1217 20:57:16.847793  568189 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 20:57:16.851353  568189 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 20:57:16.851380  568189 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 20:57:16.851393  568189 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-485134/.minikube/addons for local assets ...
	I1217 20:57:16.851448  568189 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-485134/.minikube/files for local assets ...
	I1217 20:57:16.851530  568189 filesync.go:149] local asset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem -> 4884122.pem in /etc/ssl/certs
	I1217 20:57:16.851537  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem -> /etc/ssl/certs/4884122.pem
	I1217 20:57:16.851668  568189 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 20:57:16.859497  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem --> /etc/ssl/certs/4884122.pem (1708 bytes)
	I1217 20:57:16.877490  568189 start.go:296] duration metric: took 145.555245ms for postStartSetup
	I1217 20:57:16.877573  568189 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 20:57:16.877619  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567
	I1217 20:57:16.895083  568189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33208 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/ha-148567/id_rsa Username:docker}
	I1217 20:57:16.988718  568189 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 20:57:16.993912  568189 fix.go:56] duration metric: took 5.419138386s for fixHost
	I1217 20:57:16.993941  568189 start.go:83] releasing machines lock for "ha-148567", held for 5.419189965s
	I1217 20:57:16.994013  568189 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-148567
	I1217 20:57:17.015130  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem (1338 bytes)
	W1217 20:57:17.015192  568189 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412_empty.pem, impossibly tiny 0 bytes
	I1217 20:57:17.015202  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 20:57:17.015243  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem (1082 bytes)
	I1217 20:57:17.015276  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem (1123 bytes)
	I1217 20:57:17.015305  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem (1675 bytes)
	I1217 20:57:17.015359  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem (1708 bytes)
	I1217 20:57:17.015397  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem -> /usr/share/ca-certificates/488412.pem
	I1217 20:57:17.015413  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem -> /usr/share/ca-certificates/4884122.pem
	I1217 20:57:17.015426  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:57:17.015449  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem --> /usr/share/ca-certificates/488412.pem (1338 bytes)
	I1217 20:57:17.015509  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567
	I1217 20:57:17.032525  568189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33208 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/ha-148567/id_rsa Username:docker}
	I1217 20:57:17.141021  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem --> /usr/share/ca-certificates/4884122.pem (1708 bytes)
	I1217 20:57:17.158347  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 20:57:17.175988  568189 ssh_runner.go:195] Run: openssl version
	I1217 20:57:17.182704  568189 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4884122.pem
	I1217 20:57:17.190121  568189 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4884122.pem /etc/ssl/certs/4884122.pem
	I1217 20:57:17.197484  568189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4884122.pem
	I1217 20:57:17.201345  568189 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 20:21 /usr/share/ca-certificates/4884122.pem
	I1217 20:57:17.201430  568189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4884122.pem
	I1217 20:57:17.242261  568189 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 20:57:17.249871  568189 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:57:17.257360  568189 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 20:57:17.265311  568189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:57:17.269162  568189 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:57:17.269230  568189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:57:17.310484  568189 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 20:57:17.317908  568189 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/488412.pem
	I1217 20:57:17.325039  568189 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/488412.pem /etc/ssl/certs/488412.pem
	I1217 20:57:17.332443  568189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/488412.pem
	I1217 20:57:17.336120  568189 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 20:21 /usr/share/ca-certificates/488412.pem
	I1217 20:57:17.336229  568189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/488412.pem
	I1217 20:57:17.377375  568189 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 20:57:17.384997  568189 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-certificates >/dev/null 2>&1 && sudo update-ca-certificates || true"
	I1217 20:57:17.388508  568189 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-trust >/dev/null 2>&1 && sudo update-ca-trust extract || true"
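The openssl x509 -hash -noout calls above print each certificate's subject hash, and the sudo test -L checks that /etc/ssl/certs/<hash>.0 is a symlink; that hash-named symlink layout is exactly how OpenSSL locates trusted CAs at verification time. The mapping can be reproduced by hand (a sketch using the minikubeCA path from this run):

  HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
  ls -l /etc/ssl/certs/"$HASH".0   # should resolve to minikubeCA.pem (b5213941.0 in this run)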
	I1217 20:57:17.392154  568189 ssh_runner.go:195] Run: cat /version.json
	I1217 20:57:17.392265  568189 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 20:57:17.396842  568189 ssh_runner.go:195] Run: systemctl --version
	I1217 20:57:17.490250  568189 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 20:57:17.526620  568189 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 20:57:17.531388  568189 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 20:57:17.531464  568189 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 20:57:17.539341  568189 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 20:57:17.539367  568189 start.go:496] detecting cgroup driver to use...
	I1217 20:57:17.539398  568189 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 20:57:17.539448  568189 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 20:57:17.554515  568189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 20:57:17.567414  568189 docker.go:218] disabling cri-docker service (if available) ...
	I1217 20:57:17.567477  568189 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 20:57:17.582837  568189 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 20:57:17.596146  568189 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 20:57:17.711761  568189 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 20:57:17.824951  568189 docker.go:234] disabling docker service ...
	I1217 20:57:17.825056  568189 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 20:57:17.839370  568189 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 20:57:17.852221  568189 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 20:57:17.978299  568189 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 20:57:18.106183  568189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 20:57:18.119265  568189 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 20:57:18.135218  568189 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 20:57:18.135286  568189 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:57:18.144824  568189 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1217 20:57:18.144911  568189 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:57:18.153531  568189 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:57:18.162007  568189 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:57:18.170781  568189 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 20:57:18.178861  568189 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:57:18.188770  568189 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:57:18.197027  568189 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:57:18.205801  568189 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 20:57:18.213338  568189 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 20:57:18.220373  568189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:57:18.339982  568189 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 20:57:18.523093  568189 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 20:57:18.523169  568189 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 20:57:18.526796  568189 start.go:564] Will wait 60s for crictl version
	I1217 20:57:18.526868  568189 ssh_runner.go:195] Run: which crictl
	I1217 20:57:18.530299  568189 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 20:57:18.553630  568189 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 20:57:18.553755  568189 ssh_runner.go:195] Run: crio --version
	I1217 20:57:18.582651  568189 ssh_runner.go:195] Run: crio --version
	I1217 20:57:18.616862  568189 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	I1217 20:57:18.619814  568189 cli_runner.go:164] Run: docker network inspect ha-148567 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 20:57:18.635997  568189 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1217 20:57:18.639815  568189 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 20:57:18.649441  568189 kubeadm.go:884] updating cluster {Name:ha-148567 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:ha-148567 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 20:57:18.649590  568189 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 20:57:18.649659  568189 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 20:57:18.684542  568189 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 20:57:18.684566  568189 crio.go:433] Images already preloaded, skipping extraction
	I1217 20:57:18.684622  568189 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 20:57:18.710185  568189 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 20:57:18.710209  568189 cache_images.go:86] Images are preloaded, skipping loading
	I1217 20:57:18.710218  568189 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.3 crio true true} ...
	I1217 20:57:18.710314  568189 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-148567 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:ha-148567 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
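Note the empty ExecStart= line before the real one in the generated kubelet drop-in above: in systemd, an empty ExecStart= inside a drop-in clears the ExecStart inherited from the base unit, so the override replaces the command line rather than appending a second one (which a Type=simple service would reject). The merged unit can be reviewed on the node (a sketch, reusing this run's profile):

  out/minikube-linux-arm64 -p ha-148567 ssh -- sudo systemctl cat kubelet --no-pager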
	I1217 20:57:18.710393  568189 ssh_runner.go:195] Run: crio config
	I1217 20:57:18.788945  568189 cni.go:84] Creating CNI manager for ""
	I1217 20:57:18.788969  568189 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1217 20:57:18.788980  568189 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 20:57:18.789006  568189 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-148567 NodeName:ha-148567 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 20:57:18.789146  568189 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-148567"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 20:57:18.789173  568189 kube-vip.go:115] generating kube-vip config ...
	I1217 20:57:18.789228  568189 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1217 20:57:18.801220  568189 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1217 20:57:18.801319  568189 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
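Because the ip_vs modules were reported missing just above, this manifest enables only ARP-mode failover: kube-vip elects a leader through the plndr-cp-lock lease (vip_leaderelection=true) and the winner binds 192.168.49.254 on eth0, with no IPVS load-balancing across the control planes. Whether the VIP is currently bound on a given node can be checked directly (a sketch, reusing this run's profile):

  out/minikube-linux-arm64 -p ha-148567 ssh -- ip addr show eth0 | grep 192.168.49.254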
	I1217 20:57:18.801387  568189 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1217 20:57:18.809265  568189 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 20:57:18.809341  568189 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1217 20:57:18.816975  568189 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1217 20:57:18.830189  568189 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 20:57:18.843133  568189 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
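The kubeadm config rendered earlier has now been written to /var/tmp/minikube/kubeadm.yaml.new, and it is diffed against the live copy further down. Newer kubeadm releases ship a kubeadm config validate subcommand that can sanity-check such a file; a sketch, assuming the v1.34.3 binary staged under /var/lib/minikube/binaries supports it:

  out/minikube-linux-arm64 -p ha-148567 ssh -- sudo /var/lib/minikube/binaries/v1.34.3/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new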
	I1217 20:57:18.856384  568189 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1217 20:57:18.870226  568189 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1217 20:57:18.873999  568189 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 20:57:18.883854  568189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:57:18.997472  568189 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 20:57:19.014260  568189 certs.go:69] Setting up /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567 for IP: 192.168.49.2
	I1217 20:57:19.014282  568189 certs.go:195] generating shared ca certs ...
	I1217 20:57:19.014306  568189 certs.go:227] acquiring lock for ca certs: {Name:mk2b1bd9fa0b029b02dd38a7b1b08717d4426eca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:57:19.014456  568189 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.key
	I1217 20:57:19.014513  568189 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.key
	I1217 20:57:19.014526  568189 certs.go:257] generating profile certs ...
	I1217 20:57:19.014605  568189 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/client.key
	I1217 20:57:19.014640  568189 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/apiserver.key.235ee9c5
	I1217 20:57:19.014654  568189 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/apiserver.crt.235ee9c5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I1217 20:57:19.118946  568189 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/apiserver.crt.235ee9c5 ...
	I1217 20:57:19.118983  568189 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/apiserver.crt.235ee9c5: {Name:mk1086942903d0f4fe5882a203e756f5bb8d0e75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:57:19.119164  568189 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/apiserver.key.235ee9c5 ...
	I1217 20:57:19.119181  568189 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/apiserver.key.235ee9c5: {Name:mk80ca03d9af9f78d1f49f30dce3d5755dc5ecf4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:57:19.119259  568189 certs.go:382] copying /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/apiserver.crt.235ee9c5 -> /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/apiserver.crt
	I1217 20:57:19.119408  568189 certs.go:386] copying /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/apiserver.key.235ee9c5 -> /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/apiserver.key
	I1217 20:57:19.119551  568189 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/proxy-client.key
	I1217 20:57:19.119572  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1217 20:57:19.120309  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1217 20:57:19.120337  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1217 20:57:19.120353  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1217 20:57:19.120372  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1217 20:57:19.120396  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1217 20:57:19.120412  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1217 20:57:19.120422  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1217 20:57:19.120480  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem (1338 bytes)
	W1217 20:57:19.120520  568189 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412_empty.pem, impossibly tiny 0 bytes
	I1217 20:57:19.120532  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 20:57:19.120558  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem (1082 bytes)
	I1217 20:57:19.120587  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem (1123 bytes)
	I1217 20:57:19.120618  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem (1675 bytes)
	I1217 20:57:19.120667  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem (1708 bytes)
	I1217 20:57:19.120705  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:57:19.120722  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem -> /usr/share/ca-certificates/488412.pem
	I1217 20:57:19.120734  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem -> /usr/share/ca-certificates/4884122.pem
	I1217 20:57:19.121259  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 20:57:19.145342  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 20:57:19.172402  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 20:57:19.199952  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1217 20:57:19.221869  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1217 20:57:19.249229  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 20:57:19.272832  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 20:57:19.291834  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 20:57:19.311373  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 20:57:19.330971  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem --> /usr/share/ca-certificates/488412.pem (1338 bytes)
	I1217 20:57:19.351692  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem --> /usr/share/ca-certificates/4884122.pem (1708 bytes)
	I1217 20:57:19.371686  568189 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 20:57:19.386168  568189 ssh_runner.go:195] Run: openssl version
	I1217 20:57:19.392617  568189 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4884122.pem
	I1217 20:57:19.400115  568189 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4884122.pem /etc/ssl/certs/4884122.pem
	I1217 20:57:19.407811  568189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4884122.pem
	I1217 20:57:19.411925  568189 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 20:21 /usr/share/ca-certificates/4884122.pem
	I1217 20:57:19.411990  568189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4884122.pem
	I1217 20:57:19.458050  568189 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 20:57:19.465417  568189 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:57:19.472749  568189 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 20:57:19.480441  568189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:57:19.484121  568189 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:57:19.484184  568189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:57:19.525126  568189 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 20:57:19.532547  568189 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/488412.pem
	I1217 20:57:19.539760  568189 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/488412.pem /etc/ssl/certs/488412.pem
	I1217 20:57:19.547227  568189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/488412.pem
	I1217 20:57:19.551344  568189 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 20:21 /usr/share/ca-certificates/488412.pem
	I1217 20:57:19.551429  568189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/488412.pem
	I1217 20:57:19.592800  568189 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 20:57:19.600200  568189 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 20:57:19.604024  568189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 20:57:19.651875  568189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 20:57:19.709469  568189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 20:57:19.756552  568189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 20:57:19.821907  568189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 20:57:19.909301  568189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
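The -checkend 86400 flag makes openssl x509 exit non-zero if the certificate expires within the next 86400 seconds (24 hours); a failing check here would push minikube toward regenerating the cert rather than reusing it. The actual expiry date can be read out for any of these files (a sketch using one path from this run):

  out/minikube-linux-arm64 -p ha-148567 ssh -- sudo openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver-kubelet-client.crt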
	I1217 20:57:19.984018  568189 kubeadm.go:401] StartCluster: {Name:ha-148567 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:ha-148567 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:57:19.984266  568189 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 20:57:19.984364  568189 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 20:57:20.031672  568189 cri.go:89] found id: "023b1c530d5224ef13b091e8f631aeb894024192e8f5534cf29c773714cf0197"
	I1217 20:57:20.031748  568189 cri.go:89] found id: "7b48eea7424a1e799bb5102aad672e4089e73d5c20382c2df99a7acabddf99d2"
	I1217 20:57:20.031769  568189 cri.go:89] found id: "055c04d40b9a0b3de2fc113e6e93106a29a67f711d7609c5bdc735d261688c9e"
	I1217 20:57:20.031790  568189 cri.go:89] found id: "4f2a8a504377b01cbe43d291e9fa7cd514647d2cf31a4b90042b71653d4272df"
	I1217 20:57:20.031827  568189 cri.go:89] found id: "0273f065d6acfc2f5b1353496b1c10bb1409bb5cd6154db0859cb71f3d44d9a6"
	I1217 20:57:20.031852  568189 cri.go:89] found id: ""
	I1217 20:57:20.031944  568189 ssh_runner.go:195] Run: sudo runc list -f json
	W1217 20:57:20.059831  568189 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T20:57:20Z" level=error msg="open /run/runc: no such file or directory"
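The runc list failure here is benign: runc keeps container state under its root directory, /run/runc by default, and nothing has been created there on this freshly restarted node, so the unpause sweep is skipped instead of failing the start. The same listing can be retried against an explicit root if CRI-O is configured with a different runtime state directory (a sketch; the path shown is runc's default, not taken from this run):

  out/minikube-linux-arm64 -p ha-148567 ssh -- sudo runc --root /run/runc list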
	I1217 20:57:20.059961  568189 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 20:57:20.073057  568189 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 20:57:20.073134  568189 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 20:57:20.073239  568189 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 20:57:20.082317  568189 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 20:57:20.082916  568189 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-148567" does not appear in /home/jenkins/minikube-integration/21808-485134/kubeconfig
	I1217 20:57:20.083118  568189 kubeconfig.go:62] /home/jenkins/minikube-integration/21808-485134/kubeconfig needs updating (will repair): [kubeconfig missing "ha-148567" cluster setting kubeconfig missing "ha-148567" context setting]
	I1217 20:57:20.083494  568189 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-485134/kubeconfig: {Name:mkc7214551fa855d6d4575d627c3ecd54291b351 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:57:20.084509  568189 kapi.go:59] client config for ha-148567: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/client.crt", KeyFile:"/home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/client.key", CAFile:"/home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb51f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 20:57:20.085228  568189 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1217 20:57:20.085297  568189 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1217 20:57:20.085375  568189 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1217 20:57:20.085406  568189 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1217 20:57:20.085443  568189 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1217 20:57:20.085470  568189 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1217 20:57:20.085864  568189 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 20:57:20.094780  568189 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1217 20:57:20.094863  568189 kubeadm.go:602] duration metric: took 21.689252ms to restartPrimaryControlPlane
	I1217 20:57:20.094889  568189 kubeadm.go:403] duration metric: took 110.88ms to StartCluster
	I1217 20:57:20.094935  568189 settings.go:142] acquiring lock: {Name:mk91fac3a58d91b836cd0701183ef7cc1e571672 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:57:20.095035  568189 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21808-485134/kubeconfig
	I1217 20:57:20.095784  568189 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-485134/kubeconfig: {Name:mkc7214551fa855d6d4575d627c3ecd54291b351 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:57:20.096075  568189 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 20:57:20.096138  568189 start.go:242] waiting for startup goroutines ...
	I1217 20:57:20.096184  568189 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 20:57:20.097159  568189 config.go:182] Loaded profile config "ha-148567": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:57:20.102228  568189 out.go:179] * Enabled addons: 
	I1217 20:57:20.105517  568189 addons.go:530] duration metric: took 9.330527ms for enable addons: enabled=[]
	I1217 20:57:20.105608  568189 start.go:247] waiting for cluster config update ...
	I1217 20:57:20.105634  568189 start.go:256] writing updated cluster config ...
	I1217 20:57:20.109046  568189 out.go:203] 
	I1217 20:57:20.112434  568189 config.go:182] Loaded profile config "ha-148567": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:57:20.112620  568189 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/config.json ...
	I1217 20:57:20.116088  568189 out.go:179] * Starting "ha-148567-m02" control-plane node in "ha-148567" cluster
	I1217 20:57:20.119188  568189 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 20:57:20.122470  568189 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 20:57:20.125477  568189 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 20:57:20.125543  568189 cache.go:65] Caching tarball of preloaded images
	I1217 20:57:20.125698  568189 preload.go:238] Found /home/jenkins/minikube-integration/21808-485134/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1217 20:57:20.125733  568189 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1217 20:57:20.125911  568189 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/config.json ...
	I1217 20:57:20.126192  568189 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 20:57:20.156127  568189 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 20:57:20.156146  568189 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 20:57:20.156158  568189 cache.go:243] Successfully downloaded all kic artifacts
	I1217 20:57:20.156180  568189 start.go:360] acquireMachinesLock for ha-148567-m02: {Name:mka0efc876c4e4103c7b51199829a59495ed53d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 20:57:20.156236  568189 start.go:364] duration metric: took 37.022µs to acquireMachinesLock for "ha-148567-m02"
	I1217 20:57:20.156255  568189 start.go:96] Skipping create...Using existing machine configuration
	I1217 20:57:20.156260  568189 fix.go:54] fixHost starting: m02
	I1217 20:57:20.156516  568189 cli_runner.go:164] Run: docker container inspect ha-148567-m02 --format={{.State.Status}}
	I1217 20:57:20.185826  568189 fix.go:112] recreateIfNeeded on ha-148567-m02: state=Stopped err=<nil>
	W1217 20:57:20.185852  568189 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 20:57:20.189334  568189 out.go:252] * Restarting existing docker container for "ha-148567-m02" ...
	I1217 20:57:20.189427  568189 cli_runner.go:164] Run: docker start ha-148567-m02
	I1217 20:57:20.580145  568189 cli_runner.go:164] Run: docker container inspect ha-148567-m02 --format={{.State.Status}}
	I1217 20:57:20.605573  568189 kic.go:430] container "ha-148567-m02" state is running.
	I1217 20:57:20.605996  568189 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-148567-m02
	I1217 20:57:20.637469  568189 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/config.json ...
	I1217 20:57:20.637709  568189 machine.go:94] provisionDockerMachine start ...
	I1217 20:57:20.637776  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567-m02
	I1217 20:57:20.666081  568189 main.go:143] libmachine: Using SSH client type: native
	I1217 20:57:20.666435  568189 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33213 <nil> <nil>}
	I1217 20:57:20.666445  568189 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 20:57:20.667044  568189 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55606->127.0.0.1:33213: read: connection reset by peer
	I1217 20:57:23.835171  568189 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-148567-m02
	
	I1217 20:57:23.835247  568189 ubuntu.go:182] provisioning hostname "ha-148567-m02"
	I1217 20:57:23.835352  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567-m02
	I1217 20:57:23.865477  568189 main.go:143] libmachine: Using SSH client type: native
	I1217 20:57:23.865786  568189 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33213 <nil> <nil>}
	I1217 20:57:23.865799  568189 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-148567-m02 && echo "ha-148567-m02" | sudo tee /etc/hostname
	I1217 20:57:24.081116  568189 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-148567-m02
	
	I1217 20:57:24.081197  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567-m02
	I1217 20:57:24.137190  568189 main.go:143] libmachine: Using SSH client type: native
	I1217 20:57:24.137506  568189 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33213 <nil> <nil>}
	I1217 20:57:24.137528  568189 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-148567-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-148567-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-148567-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 20:57:24.316986  568189 main.go:143] libmachine: SSH cmd err, output: <nil>: 
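The script above pins the node's hostname to 127.0.1.1 in /etc/hosts, rewriting an existing 127.0.1.1 entry when one is present and appending one otherwise. A Go sketch that reconstructs the same fragment for an arbitrary hostname (illustrative; not minikube's provision code):

    package main

    import "fmt"

    // hostsScript rebuilds the shell fragment shown in the log: ensure the
    // hostname resolves locally via a 127.0.1.1 entry in /etc/hosts.
    func hostsScript(hostname string) string {
    	return fmt.Sprintf(`
    		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
    			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
    			else
    				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
    			fi
    		fi`, hostname)
    }

    func main() {
    	fmt.Println(hostsScript("ha-148567-m02"))
    }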
	I1217 20:57:24.317016  568189 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21808-485134/.minikube CaCertPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21808-485134/.minikube}
	I1217 20:57:24.317033  568189 ubuntu.go:190] setting up certificates
	I1217 20:57:24.317049  568189 provision.go:84] configureAuth start
	I1217 20:57:24.317123  568189 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-148567-m02
	I1217 20:57:24.367712  568189 provision.go:143] copyHostCerts
	I1217 20:57:24.367760  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem
	I1217 20:57:24.367793  568189 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem, removing ...
	I1217 20:57:24.367807  568189 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem
	I1217 20:57:24.367891  568189 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem (1123 bytes)
	I1217 20:57:24.367990  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem
	I1217 20:57:24.368036  568189 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem, removing ...
	I1217 20:57:24.368044  568189 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem
	I1217 20:57:24.368085  568189 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem (1675 bytes)
	I1217 20:57:24.368162  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem
	I1217 20:57:24.368206  568189 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem, removing ...
	I1217 20:57:24.368214  568189 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem
	I1217 20:57:24.368237  568189 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem (1082 bytes)
	I1217 20:57:24.368289  568189 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem org=jenkins.ha-148567-m02 san=[127.0.0.1 192.168.49.3 ha-148567-m02 localhost minikube]
	I1217 20:57:24.734586  568189 provision.go:177] copyRemoteCerts
	I1217 20:57:24.734657  568189 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 20:57:24.734700  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567-m02
	I1217 20:57:24.752816  568189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/ha-148567-m02/id_rsa Username:docker}
	I1217 20:57:24.861032  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1217 20:57:24.861096  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 20:57:24.885807  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1217 20:57:24.885871  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1217 20:57:24.909744  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1217 20:57:24.909802  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 20:57:24.940905  568189 provision.go:87] duration metric: took 623.841925ms to configureAuth
	I1217 20:57:24.940983  568189 ubuntu.go:206] setting minikube options for container-runtime
	I1217 20:57:24.941278  568189 config.go:182] Loaded profile config "ha-148567": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:57:24.941438  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567-m02
	I1217 20:57:24.973318  568189 main.go:143] libmachine: Using SSH client type: native
	I1217 20:57:24.973626  568189 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33213 <nil> <nil>}
	I1217 20:57:24.973640  568189 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 20:57:25.394552  568189 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 20:57:25.394616  568189 machine.go:97] duration metric: took 4.756897721s to provisionDockerMachine
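Each Run: line in the provisioning above is executed over SSH against the forwarded port of the kic container (127.0.0.1:33213 here, user "docker", key from the machines directory). A minimal sketch of such an SSH runner, assuming golang.org/x/crypto/ssh; the key path, user, and port come from the log, and the command is arbitrary:

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	key, err := os.ReadFile(os.Getenv("HOME") + "/.minikube/machines/ha-148567-m02/id_rsa")
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		panic(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local kic container
    	}
    	client, err := ssh.Dial("tcp", "127.0.0.1:33213", cfg) // forwarded port from the log
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()
    	session, err := client.NewSession()
    	if err != nil {
    		panic(err)
    	}
    	defer session.Close()
    	out, err := session.CombinedOutput("sudo systemctl is-active crio")
    	fmt.Printf("%s err=%v\n", out, err)
    }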
	I1217 20:57:25.394644  568189 start.go:293] postStartSetup for "ha-148567-m02" (driver="docker")
	I1217 20:57:25.394675  568189 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 20:57:25.394774  568189 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 20:57:25.394857  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567-m02
	I1217 20:57:25.413005  568189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/ha-148567-m02/id_rsa Username:docker}
	I1217 20:57:25.507933  568189 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 20:57:25.511214  568189 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 20:57:25.511242  568189 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 20:57:25.511254  568189 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-485134/.minikube/addons for local assets ...
	I1217 20:57:25.511331  568189 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-485134/.minikube/files for local assets ...
	I1217 20:57:25.511429  568189 filesync.go:149] local asset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem -> 4884122.pem in /etc/ssl/certs
	I1217 20:57:25.511454  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem -> /etc/ssl/certs/4884122.pem
	I1217 20:57:25.511595  568189 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 20:57:25.519225  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem --> /etc/ssl/certs/4884122.pem (1708 bytes)
	I1217 20:57:25.536498  568189 start.go:296] duration metric: took 141.821713ms for postStartSetup
	I1217 20:57:25.536594  568189 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 20:57:25.536641  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567-m02
	I1217 20:57:25.554701  568189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/ha-148567-m02/id_rsa Username:docker}
	I1217 20:57:25.648875  568189 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 20:57:25.653908  568189 fix.go:56] duration metric: took 5.497641165s for fixHost
	I1217 20:57:25.653937  568189 start.go:83] releasing machines lock for "ha-148567-m02", held for 5.497692546s
	I1217 20:57:25.654030  568189 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-148567-m02
	I1217 20:57:25.674474  568189 out.go:179] * Found network options:
	I1217 20:57:25.677239  568189 out.go:179]   - NO_PROXY=192.168.49.2
	W1217 20:57:25.680103  568189 proxy.go:120] fail to check proxy env: Error ip not in block
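The "fail to check proxy env: Error ip not in block" warning comes from testing whether a NO_PROXY entry covers the node IP; a bare IP such as 192.168.49.2 is not a CIDR block, so a CIDR-based containment check fails. A small sketch of that kind of check (the function name is illustrative):

    package main

    import (
    	"fmt"
    	"net"
    )

    // ipInBlock reports whether ip falls inside the CIDR block. A NO_PROXY
    // value that is a bare IP rather than a CIDR fails to parse, which is one
    // way an "ip not in block" style check can error out.
    func ipInBlock(ip, cidr string) (bool, error) {
    	_, block, err := net.ParseCIDR(cidr)
    	if err != nil {
    		return false, err
    	}
    	return block.Contains(net.ParseIP(ip)), nil
    }

    func main() {
    	ok, err := ipInBlock("192.168.49.3", "192.168.49.0/24")
    	fmt.Println(ok, err) // true <nil>

    	_, err = ipInBlock("192.168.49.3", "192.168.49.2") // bare IP, not a CIDR
    	fmt.Println(err)                                   // invalid CIDR address: 192.168.49.2
    }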
	I1217 20:57:25.680211  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem (1338 bytes)
	W1217 20:57:25.680260  568189 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412_empty.pem, impossibly tiny 0 bytes
	I1217 20:57:25.680273  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 20:57:25.680302  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem (1082 bytes)
	I1217 20:57:25.680332  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem (1123 bytes)
	I1217 20:57:25.680360  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem (1675 bytes)
	I1217 20:57:25.680422  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem (1708 bytes)
	I1217 20:57:25.680464  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:57:25.680483  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem -> /usr/share/ca-certificates/488412.pem
	I1217 20:57:25.680501  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem -> /usr/share/ca-certificates/4884122.pem
	I1217 20:57:25.680526  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 20:57:25.680594  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567-m02
	I1217 20:57:25.699072  568189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/ha-148567-m02/id_rsa Username:docker}
	I1217 20:57:25.806704  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem --> /usr/share/ca-certificates/488412.pem (1338 bytes)
	I1217 20:57:25.825127  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem --> /usr/share/ca-certificates/4884122.pem (1708 bytes)
	I1217 20:57:25.843274  568189 ssh_runner.go:195] Run: openssl version
	I1217 20:57:25.850408  568189 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/488412.pem
	I1217 20:57:25.858349  568189 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/488412.pem /etc/ssl/certs/488412.pem
	I1217 20:57:25.866386  568189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/488412.pem
	I1217 20:57:25.870671  568189 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 20:21 /usr/share/ca-certificates/488412.pem
	I1217 20:57:25.870754  568189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/488412.pem
	I1217 20:57:25.912800  568189 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 20:57:25.920578  568189 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4884122.pem
	I1217 20:57:25.928156  568189 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4884122.pem /etc/ssl/certs/4884122.pem
	I1217 20:57:25.935802  568189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4884122.pem
	I1217 20:57:25.939813  568189 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 20:21 /usr/share/ca-certificates/4884122.pem
	I1217 20:57:25.939893  568189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4884122.pem
	I1217 20:57:25.984495  568189 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 20:57:25.993961  568189 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:57:26.008927  568189 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 20:57:26.019188  568189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:57:26.024558  568189 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:57:26.024680  568189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:57:26.082015  568189 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 20:57:26.099109  568189 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-certificates >/dev/null 2>&1 && sudo update-ca-certificates || true"
	I1217 20:57:26.105246  568189 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-trust >/dev/null 2>&1 && sudo update-ca-trust extract || true"
	W1217 20:57:26.113304  568189 proxy.go:120] fail to check proxy env: Error ip not in block
	I1217 20:57:26.113412  568189 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 20:57:26.113483  568189 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 20:57:26.349285  568189 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 20:57:26.356495  568189 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 20:57:26.356569  568189 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 20:57:26.369266  568189 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 20:57:26.369291  568189 start.go:496] detecting cgroup driver to use...
	I1217 20:57:26.369323  568189 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 20:57:26.369374  568189 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 20:57:26.391970  568189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 20:57:26.408218  568189 docker.go:218] disabling cri-docker service (if available) ...
	I1217 20:57:26.408282  568189 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 20:57:26.433162  568189 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 20:57:26.464579  568189 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 20:57:26.722421  568189 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 20:57:27.055410  568189 docker.go:234] disabling docker service ...
	I1217 20:57:27.055512  568189 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 20:57:27.105418  568189 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 20:57:27.136492  568189 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 20:57:27.498616  568189 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 20:57:27.849231  568189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 20:57:27.879943  568189 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 20:57:27.940040  568189 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 20:57:27.940159  568189 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:57:27.970284  568189 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1217 20:57:27.970406  568189 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:57:27.993313  568189 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:57:28.003134  568189 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:57:28.018148  568189 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 20:57:28.038773  568189 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:57:28.082030  568189 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:57:28.095803  568189 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:57:28.112015  568189 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 20:57:28.129347  568189 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 20:57:28.139870  568189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:57:28.466945  568189 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 20:58:58.759793  568189 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.292810635s)
	I1217 20:58:58.759820  568189 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 20:58:58.759888  568189 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 20:58:58.764083  568189 start.go:564] Will wait 60s for crictl version
	I1217 20:58:58.764156  568189 ssh_runner.go:195] Run: which crictl
	I1217 20:58:58.767972  568189 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 20:58:58.795899  568189 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 20:58:58.796007  568189 ssh_runner.go:195] Run: crio --version
	I1217 20:58:58.827201  568189 ssh_runner.go:195] Run: crio --version
	I1217 20:58:58.863057  568189 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	I1217 20:58:58.865958  568189 out.go:179]   - env NO_PROXY=192.168.49.2
	I1217 20:58:58.868926  568189 cli_runner.go:164] Run: docker network inspect ha-148567 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 20:58:58.886910  568189 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1217 20:58:58.891980  568189 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 20:58:58.903686  568189 mustload.go:66] Loading cluster: ha-148567
	I1217 20:58:58.904009  568189 config.go:182] Loaded profile config "ha-148567": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:58:58.904332  568189 cli_runner.go:164] Run: docker container inspect ha-148567 --format={{.State.Status}}
	I1217 20:58:58.922016  568189 host.go:66] Checking if "ha-148567" exists ...
	I1217 20:58:58.922335  568189 certs.go:69] Setting up /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567 for IP: 192.168.49.3
	I1217 20:58:58.922347  568189 certs.go:195] generating shared ca certs ...
	I1217 20:58:58.922361  568189 certs.go:227] acquiring lock for ca certs: {Name:mk2b1bd9fa0b029b02dd38a7b1b08717d4426eca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:58:58.922470  568189 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.key
	I1217 20:58:58.922522  568189 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.key
	I1217 20:58:58.922529  568189 certs.go:257] generating profile certs ...
	I1217 20:58:58.922618  568189 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/client.key
	I1217 20:58:58.922687  568189 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/apiserver.key.1961a769
	I1217 20:58:58.922732  568189 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/proxy-client.key
	I1217 20:58:58.922741  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1217 20:58:58.922754  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1217 20:58:58.922765  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1217 20:58:58.922777  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1217 20:58:58.922787  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1217 20:58:58.922803  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1217 20:58:58.922815  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1217 20:58:58.922825  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1217 20:58:58.922873  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem (1338 bytes)
	W1217 20:58:58.922904  568189 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412_empty.pem, impossibly tiny 0 bytes
	I1217 20:58:58.922923  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 20:58:58.922955  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem (1082 bytes)
	I1217 20:58:58.922983  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem (1123 bytes)
	I1217 20:58:58.923010  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem (1675 bytes)
	I1217 20:58:58.923089  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem (1708 bytes)
	I1217 20:58:58.923123  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem -> /usr/share/ca-certificates/4884122.pem
	I1217 20:58:58.923147  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:58:58.923161  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem -> /usr/share/ca-certificates/488412.pem
	I1217 20:58:58.923214  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567
	I1217 20:58:58.940978  568189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33208 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/ha-148567/id_rsa Username:docker}
	I1217 20:58:59.031917  568189 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1217 20:58:59.036151  568189 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1217 20:58:59.044650  568189 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1217 20:58:59.048524  568189 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1217 20:58:59.056890  568189 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1217 20:58:59.061264  568189 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1217 20:58:59.070225  568189 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1217 20:58:59.074080  568189 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1217 20:58:59.082761  568189 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1217 20:58:59.086318  568189 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1217 20:58:59.094905  568189 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1217 20:58:59.098892  568189 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1217 20:58:59.107797  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 20:58:59.130640  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 20:58:59.150337  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 20:58:59.170619  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1217 20:58:59.190148  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1217 20:58:59.207919  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 20:58:59.226715  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 20:58:59.255397  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 20:58:59.275249  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem --> /usr/share/ca-certificates/4884122.pem (1708 bytes)
	I1217 20:58:59.296360  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 20:58:59.315496  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem --> /usr/share/ca-certificates/488412.pem (1338 bytes)
	I1217 20:58:59.335711  568189 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1217 20:58:59.351659  568189 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1217 20:58:59.365425  568189 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1217 20:58:59.379095  568189 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1217 20:58:59.403513  568189 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1217 20:58:59.417385  568189 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1217 20:58:59.430972  568189 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1217 20:58:59.445861  568189 ssh_runner.go:195] Run: openssl version
	I1217 20:58:59.452092  568189 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4884122.pem
	I1217 20:58:59.460052  568189 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4884122.pem /etc/ssl/certs/4884122.pem
	I1217 20:58:59.467896  568189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4884122.pem
	I1217 20:58:59.471905  568189 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 20:21 /usr/share/ca-certificates/4884122.pem
	I1217 20:58:59.472027  568189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4884122.pem
	I1217 20:58:59.513981  568189 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 20:58:59.521659  568189 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:58:59.529706  568189 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 20:58:59.537199  568189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:58:59.541310  568189 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:58:59.541399  568189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:58:59.585446  568189 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 20:58:59.592862  568189 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/488412.pem
	I1217 20:58:59.600234  568189 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/488412.pem /etc/ssl/certs/488412.pem
	I1217 20:58:59.608581  568189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/488412.pem
	I1217 20:58:59.612452  568189 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 20:21 /usr/share/ca-certificates/488412.pem
	I1217 20:58:59.612541  568189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/488412.pem
	I1217 20:58:59.653344  568189 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 20:58:59.661141  568189 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 20:58:59.665238  568189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 20:58:59.706455  568189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 20:58:59.747808  568189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 20:58:59.789584  568189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 20:58:59.830635  568189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 20:58:59.871901  568189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
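The -checkend 86400 runs above verify that each control-plane certificate remains valid for at least the next 24 hours. An equivalent check in Go with crypto/x509 (a sketch, not minikube's code):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // checkend mimics `openssl x509 -checkend 86400`: it returns an error if
    // the certificate at path expires within the given window.
    func checkend(path string, within time.Duration) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return fmt.Errorf("%s: no PEM block found", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return err
    	}
    	if time.Now().Add(within).After(cert.NotAfter) {
    		return fmt.Errorf("%s expires at %s", path, cert.NotAfter)
    	}
    	return nil
    }

    func main() {
    	for _, p := range []string{
    		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
    		"/var/lib/minikube/certs/front-proxy-client.crt",
    	} {
    		fmt.Println(p, checkend(p, 24*time.Hour))
    	}
    }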
	I1217 20:58:59.913067  568189 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.3 crio true true} ...
	I1217 20:58:59.913211  568189 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-148567-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:ha-148567 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 20:58:59.913253  568189 kube-vip.go:115] generating kube-vip config ...
	I1217 20:58:59.913314  568189 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1217 20:58:59.926579  568189 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1217 20:58:59.926690  568189 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
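This manifest is later written to /etc/kubernetes/manifests/kube-vip.yaml (see the scp below), so the kubelet runs kube-vip as a static pod advertising the control-plane VIP 192.168.49.254 on port 8443. A heavily trimmed text/template sketch of how the per-cluster fields might be filled in (illustrative only, not minikube's generator):

    package main

    import (
    	"os"
    	"text/template"
    )

    // Trimmed version of the kube-vip static-pod manifest above; only the
    // fields that vary per cluster are templated.
    const kubeVIPTmpl = `apiVersion: v1
    kind: Pod
    metadata:
      name: kube-vip
      namespace: kube-system
    spec:
      containers:
      - args: ["manager"]
        env:
        - name: port
          value: "{{ .Port }}"
        - name: address
          value: {{ .VIP }}
        image: ghcr.io/kube-vip/kube-vip:v1.0.2
        name: kube-vip
      hostNetwork: true
    `

    func main() {
    	t := template.Must(template.New("kube-vip").Parse(kubeVIPTmpl))
    	t.Execute(os.Stdout, struct {
    		VIP  string
    		Port int
    	}{VIP: "192.168.49.254", Port: 8443})
    }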
	I1217 20:58:59.926836  568189 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1217 20:58:59.934802  568189 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 20:58:59.934923  568189 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1217 20:58:59.942778  568189 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1217 20:58:59.955655  568189 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 20:58:59.968160  568189 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1217 20:58:59.982401  568189 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1217 20:58:59.986001  568189 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 20:58:59.995859  568189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:59:00.404474  568189 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 20:59:00.421506  568189 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 20:59:00.421874  568189 config.go:182] Loaded profile config "ha-148567": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:59:00.427429  568189 out.go:179] * Verifying Kubernetes components...
	I1217 20:59:00.430438  568189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:59:00.576754  568189 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 20:59:00.591993  568189 kapi.go:59] client config for ha-148567: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/client.crt", KeyFile:"/home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/client.key", CAFile:"/home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb51f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1217 20:59:00.592071  568189 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1217 20:59:00.592328  568189 node_ready.go:35] waiting up to 6m0s for node "ha-148567-m02" to be "Ready" ...
	I1217 20:59:07.706331  568189 node_ready.go:49] node "ha-148567-m02" is "Ready"
	I1217 20:59:07.706358  568189 node_ready.go:38] duration metric: took 7.114006977s for node "ha-148567-m02" to be "Ready" ...
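The node_ready.go wait above polls the node object until its Ready condition turns true. A client-go sketch of the same wait, assuming k8s.io/client-go and a kubeconfig at the default location (names and intervals illustrative):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady polls the node's Ready condition until true or timeout.
    func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
    		if err == nil {
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    					return nil
    				}
    			}
    		}
    		time.Sleep(2 * time.Second)
    	}
    	return fmt.Errorf("node %q never became Ready", name)
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(waitNodeReady(cs, "ha-148567-m02", 6*time.Minute))
    }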
	I1217 20:59:07.706371  568189 api_server.go:52] waiting for apiserver process to appear ...
	I1217 20:59:07.706429  568189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:59:07.728020  568189 api_server.go:72] duration metric: took 7.306463101s to wait for apiserver process to appear ...
	I1217 20:59:07.728044  568189 api_server.go:88] waiting for apiserver healthz status ...
	I1217 20:59:07.728063  568189 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1217 20:59:07.763283  568189 api_server.go:279] https://192.168.49.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1217 20:59:07.763309  568189 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1217 20:59:08.228746  568189 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1217 20:59:08.252676  568189 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 20:59:08.252767  568189 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1217 20:59:08.728188  568189 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1217 20:59:08.754073  568189 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 20:59:08.754096  568189 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1217 20:59:09.228723  568189 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1217 20:59:09.239736  568189 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 20:59:09.239818  568189 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1217 20:59:09.728191  568189 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1217 20:59:09.749211  568189 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 20:59:09.749236  568189 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[... same check output as above ...]
	[... probes repeated every ~500 ms from 20:59:10.2 through 20:59:12.2, each returning 500 with identical output: all checks ok except [-]poststarthook/start-service-ip-repair-controllers and [-]poststarthook/rbac/bootstrap-roles ...]
	I1217 20:59:12.741760  568189 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[... same output, except [+]poststarthook/rbac/bootstrap-roles now ok; only [-]poststarthook/start-service-ip-repair-controllers failed: reason withheld ...]
	[... probes repeated every ~500 ms through 20:59:16.7, each returning 500 with this same output ...]
	I1217 20:59:17.228177  568189 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1217 20:59:17.237306  568189 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1217 20:59:17.238535  568189 api_server.go:141] control plane version: v1.34.3
	I1217 20:59:17.238566  568189 api_server.go:131] duration metric: took 9.510515092s to wait for apiserver health ...
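
The 9.5 s wait above is a plain retry loop against the apiserver's /healthz endpoint: probe roughly every 500 ms, dump the per-check body whenever the status is not 200, and stop on the first 200. The Go sketch below illustrates that pattern; it is not minikube's actual api_server.go code, and the insecure TLS transport and one-minute deadline are assumptions made for the sketch (a real client would trust the cluster CA from the kubeconfig).

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout elapses.
// On failure the apiserver answers 500 with one [+]/[-] line per check,
// exactly as captured in the log above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption for the sketch only: skip certificate verification
		// instead of loading the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.49.2:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
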
	I1217 20:59:17.238576  568189 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 20:59:17.244973  568189 system_pods.go:59] 26 kube-system pods found
	I1217 20:59:17.245011  568189 system_pods.go:61] "coredns-66bc5c9577-l8xqv" [e3a7d90f-a4fd-4393-a20a-d49f8a8aa0db] Running
	I1217 20:59:17.245018  568189 system_pods.go:61] "coredns-66bc5c9577-wgcmx" [4fbbda83-a7d2-41c0-98ea-066d493cd483] Running
	I1217 20:59:17.245023  568189 system_pods.go:61] "etcd-ha-148567" [d54c6b05-3d02-4af3-a488-ffe69d55c8c3] Running
	I1217 20:59:17.245027  568189 system_pods.go:61] "etcd-ha-148567-m02" [574da258-f8b2-4230-93a1-028b2c5765bd] Running
	I1217 20:59:17.245031  568189 system_pods.go:61] "etcd-ha-148567-m03" [c1a60f08-76af-48b5-afe6-9389bc0fca8b] Running
	I1217 20:59:17.245034  568189 system_pods.go:61] "kindnet-4xxcs" [3950f031-7524-40a2-aa03-f50e754478ed] Running
	I1217 20:59:17.245038  568189 system_pods.go:61] "kindnet-88zsz" [22238c23-358a-49e1-82d6-ac88a03c654f] Running
	I1217 20:59:17.245042  568189 system_pods.go:61] "kindnet-gwspj" [f5b895d0-8eba-4923-9537-bd07bd57d3b5] Running
	I1217 20:59:17.245046  568189 system_pods.go:61] "kindnet-pv94f" [6135ada6-3e1f-4b1b-a2e2-014a1eb62772] Running
	I1217 20:59:17.245054  568189 system_pods.go:61] "kube-apiserver-ha-148567" [3b830c1a-94be-41e9-bc3a-725b97833055] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 20:59:17.245060  568189 system_pods.go:61] "kube-apiserver-ha-148567-m02" [0f0e1b1c-b98e-46cd-a683-ce654b6fdeb1] Running
	I1217 20:59:17.245070  568189 system_pods.go:61] "kube-apiserver-ha-148567-m03" [1be33e29-7193-43bb-aaf4-e48cbf50abf9] Running
	I1217 20:59:17.245078  568189 system_pods.go:61] "kube-controller-manager-ha-148567" [8baf857a-5cbe-4de6-9aba-7a244ab2fb08] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 20:59:17.245086  568189 system_pods.go:61] "kube-controller-manager-ha-148567-m02" [da8d38ea-86a0-4b97-9512-0541ac0d2d44] Running
	I1217 20:59:17.245090  568189 system_pods.go:61] "kube-controller-manager-ha-148567-m03" [360b1069-90d9-45be-a0c3-3c81c88154e0] Running
	I1217 20:59:17.245094  568189 system_pods.go:61] "kube-proxy-8nmpd" [52f8f6d6-8de7-4758-b35e-843a2a6ed562] Running
	I1217 20:59:17.245097  568189 system_pods.go:61] "kube-proxy-9n5cb" [775371d6-bc7d-40f6-8e0f-655f265828ba] Running
	I1217 20:59:17.245101  568189 system_pods.go:61] "kube-proxy-9rv8b" [7e160782-bab5-49b5-950c-377bde3bec7f] Running
	I1217 20:59:17.245109  568189 system_pods.go:61] "kube-proxy-cbk47" [149fd1d8-7762-49fd-81da-23047452dc4a] Running
	I1217 20:59:17.245113  568189 system_pods.go:61] "kube-scheduler-ha-148567" [a04a9bb2-7f73-4940-9952-049c4b406086] Running
	I1217 20:59:17.245124  568189 system_pods.go:61] "kube-scheduler-ha-148567-m02" [534b3ee9-d48c-450d-b20d-308e0a13a720] Running
	I1217 20:59:17.245128  568189 system_pods.go:61] "kube-scheduler-ha-148567-m03" [08f0c7d8-2adc-444e-a8db-5ad41340d101] Running
	I1217 20:59:17.245132  568189 system_pods.go:61] "kube-vip-ha-148567" [05c573d7-3cc9-4d90-8a3e-154ba3c7423a] Running
	I1217 20:59:17.245136  568189 system_pods.go:61] "kube-vip-ha-148567-m02" [4c4de02d-d57f-4a66-983e-12dbdb0f3521] Running
	I1217 20:59:17.245140  568189 system_pods.go:61] "kube-vip-ha-148567-m03" [37a516e3-0f3b-413f-be72-a445c095667f] Running
	I1217 20:59:17.245144  568189 system_pods.go:61] "storage-provisioner" [b613dd7f-cf83-4a6a-a6b8-c7b6282790ab] Running
	I1217 20:59:17.245153  568189 system_pods.go:74] duration metric: took 6.571369ms to wait for pod list to return data ...
	I1217 20:59:17.245166  568189 default_sa.go:34] waiting for default service account to be created ...
	I1217 20:59:17.248376  568189 default_sa.go:45] found service account: "default"
	I1217 20:59:17.248403  568189 default_sa.go:55] duration metric: took 3.23112ms for default service account to be created ...
	I1217 20:59:17.248414  568189 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 20:59:17.254388  568189 system_pods.go:86] 26 kube-system pods found
	I1217 20:59:17.254429  568189 system_pods.go:89] "coredns-66bc5c9577-l8xqv" [e3a7d90f-a4fd-4393-a20a-d49f8a8aa0db] Running
	I1217 20:59:17.254436  568189 system_pods.go:89] "coredns-66bc5c9577-wgcmx" [4fbbda83-a7d2-41c0-98ea-066d493cd483] Running
	I1217 20:59:17.254441  568189 system_pods.go:89] "etcd-ha-148567" [d54c6b05-3d02-4af3-a488-ffe69d55c8c3] Running
	I1217 20:59:17.254445  568189 system_pods.go:89] "etcd-ha-148567-m02" [574da258-f8b2-4230-93a1-028b2c5765bd] Running
	I1217 20:59:17.254450  568189 system_pods.go:89] "etcd-ha-148567-m03" [c1a60f08-76af-48b5-afe6-9389bc0fca8b] Running
	I1217 20:59:17.254454  568189 system_pods.go:89] "kindnet-4xxcs" [3950f031-7524-40a2-aa03-f50e754478ed] Running
	I1217 20:59:17.254458  568189 system_pods.go:89] "kindnet-88zsz" [22238c23-358a-49e1-82d6-ac88a03c654f] Running
	I1217 20:59:17.254464  568189 system_pods.go:89] "kindnet-gwspj" [f5b895d0-8eba-4923-9537-bd07bd57d3b5] Running
	I1217 20:59:17.254471  568189 system_pods.go:89] "kindnet-pv94f" [6135ada6-3e1f-4b1b-a2e2-014a1eb62772] Running
	I1217 20:59:17.254478  568189 system_pods.go:89] "kube-apiserver-ha-148567" [3b830c1a-94be-41e9-bc3a-725b97833055] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 20:59:17.254487  568189 system_pods.go:89] "kube-apiserver-ha-148567-m02" [0f0e1b1c-b98e-46cd-a683-ce654b6fdeb1] Running
	I1217 20:59:17.254493  568189 system_pods.go:89] "kube-apiserver-ha-148567-m03" [1be33e29-7193-43bb-aaf4-e48cbf50abf9] Running
	I1217 20:59:17.254506  568189 system_pods.go:89] "kube-controller-manager-ha-148567" [8baf857a-5cbe-4de6-9aba-7a244ab2fb08] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 20:59:17.254511  568189 system_pods.go:89] "kube-controller-manager-ha-148567-m02" [da8d38ea-86a0-4b97-9512-0541ac0d2d44] Running
	I1217 20:59:17.254523  568189 system_pods.go:89] "kube-controller-manager-ha-148567-m03" [360b1069-90d9-45be-a0c3-3c81c88154e0] Running
	I1217 20:59:17.254527  568189 system_pods.go:89] "kube-proxy-8nmpd" [52f8f6d6-8de7-4758-b35e-843a2a6ed562] Running
	I1217 20:59:17.254531  568189 system_pods.go:89] "kube-proxy-9n5cb" [775371d6-bc7d-40f6-8e0f-655f265828ba] Running
	I1217 20:59:17.254535  568189 system_pods.go:89] "kube-proxy-9rv8b" [7e160782-bab5-49b5-950c-377bde3bec7f] Running
	I1217 20:59:17.254539  568189 system_pods.go:89] "kube-proxy-cbk47" [149fd1d8-7762-49fd-81da-23047452dc4a] Running
	I1217 20:59:17.254544  568189 system_pods.go:89] "kube-scheduler-ha-148567" [a04a9bb2-7f73-4940-9952-049c4b406086] Running
	I1217 20:59:17.254548  568189 system_pods.go:89] "kube-scheduler-ha-148567-m02" [534b3ee9-d48c-450d-b20d-308e0a13a720] Running
	I1217 20:59:17.254554  568189 system_pods.go:89] "kube-scheduler-ha-148567-m03" [08f0c7d8-2adc-444e-a8db-5ad41340d101] Running
	I1217 20:59:17.254558  568189 system_pods.go:89] "kube-vip-ha-148567" [05c573d7-3cc9-4d90-8a3e-154ba3c7423a] Running
	I1217 20:59:17.254564  568189 system_pods.go:89] "kube-vip-ha-148567-m02" [4c4de02d-d57f-4a66-983e-12dbdb0f3521] Running
	I1217 20:59:17.254568  568189 system_pods.go:89] "kube-vip-ha-148567-m03" [37a516e3-0f3b-413f-be72-a445c095667f] Running
	I1217 20:59:17.254574  568189 system_pods.go:89] "storage-provisioner" [b613dd7f-cf83-4a6a-a6b8-c7b6282790ab] Running
	I1217 20:59:17.254581  568189 system_pods.go:126] duration metric: took 6.162224ms to wait for k8s-apps to be running ...
	I1217 20:59:17.254602  568189 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 20:59:17.254663  568189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 20:59:17.268613  568189 system_svc.go:56] duration metric: took 13.999372ms WaitForService to wait for kubelet
	I1217 20:59:17.268642  568189 kubeadm.go:587] duration metric: took 16.847089867s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 20:59:17.268661  568189 node_conditions.go:102] verifying NodePressure condition ...
	I1217 20:59:17.272882  568189 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1217 20:59:17.272914  568189 node_conditions.go:123] node cpu capacity is 2
	I1217 20:59:17.272927  568189 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1217 20:59:17.272933  568189 node_conditions.go:123] node cpu capacity is 2
	I1217 20:59:17.272955  568189 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1217 20:59:17.272965  568189 node_conditions.go:123] node cpu capacity is 2
	I1217 20:59:17.272970  568189 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1217 20:59:17.272974  568189 node_conditions.go:123] node cpu capacity is 2
	I1217 20:59:17.272990  568189 node_conditions.go:105] duration metric: took 4.323407ms to run NodePressure ...
	I1217 20:59:17.273004  568189 start.go:242] waiting for startup goroutines ...
	I1217 20:59:17.273044  568189 start.go:256] writing updated cluster config ...
	I1217 20:59:17.276641  568189 out.go:203] 
	I1217 20:59:17.279823  568189 config.go:182] Loaded profile config "ha-148567": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:59:17.279977  568189 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/config.json ...
	I1217 20:59:17.283346  568189 out.go:179] * Starting "ha-148567-m03" control-plane node in "ha-148567" cluster
	I1217 20:59:17.287005  568189 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 20:59:17.289900  568189 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 20:59:17.292694  568189 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 20:59:17.292719  568189 cache.go:65] Caching tarball of preloaded images
	I1217 20:59:17.292773  568189 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 20:59:17.292856  568189 preload.go:238] Found /home/jenkins/minikube-integration/21808-485134/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1217 20:59:17.292875  568189 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1217 20:59:17.293025  568189 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/config.json ...
	I1217 20:59:17.316772  568189 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 20:59:17.316795  568189 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 20:59:17.316808  568189 cache.go:243] Successfully downloaded all kic artifacts
	I1217 20:59:17.316834  568189 start.go:360] acquireMachinesLock for ha-148567-m03: {Name:mk79ac9edce64d0e8c2ded9c9074a2bd7d2b5d55 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 20:59:17.316888  568189 start.go:364] duration metric: took 38.95µs to acquireMachinesLock for "ha-148567-m03"
	I1217 20:59:17.316913  568189 start.go:96] Skipping create...Using existing machine configuration
	I1217 20:59:17.316918  568189 fix.go:54] fixHost starting: m03
	I1217 20:59:17.317283  568189 cli_runner.go:164] Run: docker container inspect ha-148567-m03 --format={{.State.Status}}
	I1217 20:59:17.334541  568189 fix.go:112] recreateIfNeeded on ha-148567-m03: state=Stopped err=<nil>
	W1217 20:59:17.334574  568189 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 20:59:17.337913  568189 out.go:252] * Restarting existing docker container for "ha-148567-m03" ...
	I1217 20:59:17.337998  568189 cli_runner.go:164] Run: docker start ha-148567-m03
	I1217 20:59:17.630601  568189 cli_runner.go:164] Run: docker container inspect ha-148567-m03 --format={{.State.Status}}
	I1217 20:59:17.661698  568189 kic.go:430] container "ha-148567-m03" state is running.
	I1217 20:59:17.662070  568189 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-148567-m03
	I1217 20:59:17.697058  568189 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/config.json ...
	I1217 20:59:17.697290  568189 machine.go:94] provisionDockerMachine start ...
	I1217 20:59:17.697346  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567-m03
	I1217 20:59:17.735501  568189 main.go:143] libmachine: Using SSH client type: native
	I1217 20:59:17.735872  568189 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33218 <nil> <nil>}
	I1217 20:59:17.735883  568189 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 20:59:17.736599  568189 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33978->127.0.0.1:33218: read: connection reset by peer
	I1217 20:59:20.923505  568189 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-148567-m03
	
	I1217 20:59:20.923622  568189 ubuntu.go:182] provisioning hostname "ha-148567-m03"
	I1217 20:59:20.923718  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567-m03
	I1217 20:59:20.957211  568189 main.go:143] libmachine: Using SSH client type: native
	I1217 20:59:20.957509  568189 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33218 <nil> <nil>}
	I1217 20:59:20.957520  568189 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-148567-m03 && echo "ha-148567-m03" | sudo tee /etc/hostname
	I1217 20:59:21.165423  568189 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-148567-m03
	
	I1217 20:59:21.165574  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567-m03
	I1217 20:59:21.192963  568189 main.go:143] libmachine: Using SSH client type: native
	I1217 20:59:21.193292  568189 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33218 <nil> <nil>}
	I1217 20:59:21.193313  568189 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-148567-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-148567-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-148567-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 20:59:21.368432  568189 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 20:59:21.368455  568189 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21808-485134/.minikube CaCertPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21808-485134/.minikube}
	I1217 20:59:21.368471  568189 ubuntu.go:190] setting up certificates
	I1217 20:59:21.368480  568189 provision.go:84] configureAuth start
	I1217 20:59:21.368545  568189 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-148567-m03
	I1217 20:59:21.396285  568189 provision.go:143] copyHostCerts
	I1217 20:59:21.396333  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem
	I1217 20:59:21.396368  568189 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem, removing ...
	I1217 20:59:21.396381  568189 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem
	I1217 20:59:21.396464  568189 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem (1082 bytes)
	I1217 20:59:21.396552  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem
	I1217 20:59:21.396575  568189 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem, removing ...
	I1217 20:59:21.396586  568189 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem
	I1217 20:59:21.396614  568189 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem (1123 bytes)
	I1217 20:59:21.396662  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem
	I1217 20:59:21.396683  568189 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem, removing ...
	I1217 20:59:21.396693  568189 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem
	I1217 20:59:21.396721  568189 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem (1675 bytes)
	I1217 20:59:21.396774  568189 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem org=jenkins.ha-148567-m03 san=[127.0.0.1 192.168.49.4 ha-148567-m03 localhost minikube]
	I1217 20:59:21.571429  568189 provision.go:177] copyRemoteCerts
	I1217 20:59:21.571550  568189 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 20:59:21.571647  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567-m03
	I1217 20:59:21.594363  568189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33218 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/ha-148567-m03/id_rsa Username:docker}
	I1217 20:59:21.708000  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1217 20:59:21.708057  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 20:59:21.741918  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1217 20:59:21.741984  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 20:59:21.772491  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1217 20:59:21.772556  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1217 20:59:21.816467  568189 provision.go:87] duration metric: took 447.972227ms to configureAuth
	I1217 20:59:21.816545  568189 ubuntu.go:206] setting minikube options for container-runtime
	I1217 20:59:21.816837  568189 config.go:182] Loaded profile config "ha-148567": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:59:21.816991  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567-m03
	I1217 20:59:21.842199  568189 main.go:143] libmachine: Using SSH client type: native
	I1217 20:59:21.842497  568189 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33218 <nil> <nil>}
	I1217 20:59:21.842510  568189 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 20:59:23.388796  568189 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 20:59:23.388873  568189 machine.go:97] duration metric: took 5.691572483s to provisionDockerMachine
	I1217 20:59:23.388901  568189 start.go:293] postStartSetup for "ha-148567-m03" (driver="docker")
	I1217 20:59:23.388945  568189 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 20:59:23.389048  568189 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 20:59:23.389125  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567-m03
	I1217 20:59:23.407539  568189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33218 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/ha-148567-m03/id_rsa Username:docker}
	I1217 20:59:23.504717  568189 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 20:59:23.508445  568189 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 20:59:23.508475  568189 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 20:59:23.508497  568189 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-485134/.minikube/addons for local assets ...
	I1217 20:59:23.508554  568189 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-485134/.minikube/files for local assets ...
	I1217 20:59:23.508641  568189 filesync.go:149] local asset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem -> 4884122.pem in /etc/ssl/certs
	I1217 20:59:23.508652  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem -> /etc/ssl/certs/4884122.pem
	I1217 20:59:23.508753  568189 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 20:59:23.516893  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem --> /etc/ssl/certs/4884122.pem (1708 bytes)
	I1217 20:59:23.537776  568189 start.go:296] duration metric: took 148.841829ms for postStartSetup
	I1217 20:59:23.537865  568189 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 20:59:23.537922  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567-m03
	I1217 20:59:23.556786  568189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33218 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/ha-148567-m03/id_rsa Username:docker}
	I1217 20:59:23.652766  568189 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 20:59:23.658117  568189 fix.go:56] duration metric: took 6.341191994s for fixHost
	I1217 20:59:23.658141  568189 start.go:83] releasing machines lock for "ha-148567-m03", held for 6.341239765s
	I1217 20:59:23.658236  568189 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-148567-m03
	I1217 20:59:23.679391  568189 out.go:179] * Found network options:
	I1217 20:59:23.682308  568189 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1217 20:59:23.685317  568189 proxy.go:120] fail to check proxy env: Error ip not in block
	W1217 20:59:23.685349  568189 proxy.go:120] fail to check proxy env: Error ip not in block
	I1217 20:59:23.685436  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem (1338 bytes)
	W1217 20:59:23.685484  568189 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412_empty.pem, impossibly tiny 0 bytes
	I1217 20:59:23.685498  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 20:59:23.685532  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem (1082 bytes)
	I1217 20:59:23.685564  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem (1123 bytes)
	I1217 20:59:23.685595  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem (1675 bytes)
	I1217 20:59:23.685643  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem (1708 bytes)
	I1217 20:59:23.685680  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem -> /usr/share/ca-certificates/488412.pem
	I1217 20:59:23.685700  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem -> /usr/share/ca-certificates/4884122.pem
	I1217 20:59:23.685712  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:59:23.685732  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem --> /usr/share/ca-certificates/488412.pem (1338 bytes)
	I1217 20:59:23.685785  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567-m03
	I1217 20:59:23.704133  568189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33218 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/ha-148567-m03/id_rsa Username:docker}
	I1217 20:59:23.825155  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem --> /usr/share/ca-certificates/4884122.pem (1708 bytes)
	I1217 20:59:23.849401  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 20:59:23.873252  568189 ssh_runner.go:195] Run: openssl version
	I1217 20:59:23.884717  568189 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/488412.pem
	I1217 20:59:23.894872  568189 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/488412.pem /etc/ssl/certs/488412.pem
	I1217 20:59:23.906983  568189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/488412.pem
	I1217 20:59:23.912255  568189 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 20:21 /usr/share/ca-certificates/488412.pem
	I1217 20:59:23.912326  568189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/488412.pem
	I1217 20:59:23.985078  568189 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 20:59:23.994724  568189 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4884122.pem
	I1217 20:59:24.026915  568189 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4884122.pem /etc/ssl/certs/4884122.pem
	I1217 20:59:24.068192  568189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4884122.pem
	I1217 20:59:24.080822  568189 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 20:21 /usr/share/ca-certificates/4884122.pem
	I1217 20:59:24.080947  568189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4884122.pem
	I1217 20:59:24.182542  568189 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 20:59:24.200285  568189 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:59:24.222177  568189 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 20:59:24.235700  568189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:59:24.244507  568189 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:59:24.244617  568189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:59:24.320887  568189 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 20:59:24.336685  568189 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-certificates >/dev/null 2>&1 && sudo update-ca-certificates || true"
	I1217 20:59:24.350359  568189 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-trust >/dev/null 2>&1 && sudo update-ca-trust extract || true"
	W1217 20:59:24.358402  568189 proxy.go:120] fail to check proxy env: Error ip not in block
	W1217 20:59:24.358481  568189 proxy.go:120] fail to check proxy env: Error ip not in block
	I1217 20:59:24.358586  568189 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 20:59:24.358716  568189 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 20:59:24.592070  568189 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 20:59:24.599441  568189 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 20:59:24.599517  568189 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 20:59:24.610713  568189 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 20:59:24.610738  568189 start.go:496] detecting cgroup driver to use...
	I1217 20:59:24.610768  568189 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 20:59:24.610821  568189 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 20:59:24.642252  568189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 20:59:24.667730  568189 docker.go:218] disabling cri-docker service (if available) ...
	I1217 20:59:24.667804  568189 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 20:59:24.701389  568189 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 20:59:24.736876  568189 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 20:59:25.009438  568189 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 20:59:25.297427  568189 docker.go:234] disabling docker service ...
	I1217 20:59:25.297496  568189 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 20:59:25.322653  568189 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 20:59:25.339124  568189 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 20:59:25.552070  568189 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 20:59:25.758562  568189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 20:59:25.777883  568189 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 20:59:25.800345  568189 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 20:59:25.800419  568189 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:59:25.816339  568189 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1217 20:59:25.816411  568189 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:59:25.826969  568189 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:59:25.836513  568189 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:59:25.846534  568189 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 20:59:25.856329  568189 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:59:25.866346  568189 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:59:25.875696  568189 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:59:25.885875  568189 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 20:59:25.894536  568189 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 20:59:25.903937  568189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:59:26.158009  568189 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 20:59:27.447640  568189 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.289596192s)
	I1217 20:59:27.447667  568189 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 20:59:27.447742  568189 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 20:59:27.451909  568189 start.go:564] Will wait 60s for crictl version
	I1217 20:59:27.452022  568189 ssh_runner.go:195] Run: which crictl
	I1217 20:59:27.455782  568189 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 20:59:27.480696  568189 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 20:59:27.480875  568189 ssh_runner.go:195] Run: crio --version
	I1217 20:59:27.511380  568189 ssh_runner.go:195] Run: crio --version
	I1217 20:59:27.545667  568189 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	I1217 20:59:27.548725  568189 out.go:179]   - env NO_PROXY=192.168.49.2
	I1217 20:59:27.551654  568189 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1217 20:59:27.554631  568189 cli_runner.go:164] Run: docker network inspect ha-148567 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 20:59:27.569507  568189 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1217 20:59:27.573575  568189 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 20:59:27.583348  568189 mustload.go:66] Loading cluster: ha-148567
	I1217 20:59:27.583685  568189 config.go:182] Loaded profile config "ha-148567": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:59:27.583957  568189 cli_runner.go:164] Run: docker container inspect ha-148567 --format={{.State.Status}}
	I1217 20:59:27.602103  568189 host.go:66] Checking if "ha-148567" exists ...
	I1217 20:59:27.603047  568189 certs.go:69] Setting up /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567 for IP: 192.168.49.4
	I1217 20:59:27.603066  568189 certs.go:195] generating shared ca certs ...
	I1217 20:59:27.603090  568189 certs.go:227] acquiring lock for ca certs: {Name:mk2b1bd9fa0b029b02dd38a7b1b08717d4426eca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:59:27.603216  568189 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.key
	I1217 20:59:27.603263  568189 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.key
	I1217 20:59:27.603274  568189 certs.go:257] generating profile certs ...
	I1217 20:59:27.603376  568189 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/client.key
	I1217 20:59:27.603463  568189 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/apiserver.key.3b1ba341
	I1217 20:59:27.603515  568189 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/proxy-client.key
	I1217 20:59:27.603530  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1217 20:59:27.603543  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1217 20:59:27.603558  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1217 20:59:27.603572  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1217 20:59:27.603621  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1217 20:59:27.603634  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1217 20:59:27.603645  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1217 20:59:27.603655  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1217 20:59:27.603709  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem (1338 bytes)
	W1217 20:59:27.603744  568189 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412_empty.pem, impossibly tiny 0 bytes
	I1217 20:59:27.603756  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 20:59:27.603782  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem (1082 bytes)
	I1217 20:59:27.603813  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem (1123 bytes)
	I1217 20:59:27.603839  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem (1675 bytes)
	I1217 20:59:27.603886  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem (1708 bytes)
	I1217 20:59:27.603922  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:59:27.603937  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem -> /usr/share/ca-certificates/488412.pem
	I1217 20:59:27.603948  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem -> /usr/share/ca-certificates/4884122.pem
	I1217 20:59:27.604007  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567
	I1217 20:59:27.622811  568189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33208 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/ha-148567/id_rsa Username:docker}
	I1217 20:59:27.711932  568189 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1217 20:59:27.715648  568189 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1217 20:59:27.723761  568189 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1217 20:59:27.727209  568189 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1217 20:59:27.735381  568189 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1217 20:59:27.738998  568189 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1217 20:59:27.747188  568189 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1217 20:59:27.750785  568189 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1217 20:59:27.758913  568189 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1217 20:59:27.762427  568189 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1217 20:59:27.770856  568189 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1217 20:59:27.774347  568189 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1217 20:59:27.782918  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 20:59:27.807233  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 20:59:27.825936  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 20:59:27.843705  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1217 20:59:27.863259  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1217 20:59:27.883764  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 20:59:27.904255  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 20:59:27.951575  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 20:59:27.979511  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 20:59:28.010041  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem --> /usr/share/ca-certificates/488412.pem (1338 bytes)
	I1217 20:59:28.032795  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem --> /usr/share/ca-certificates/4884122.pem (1708 bytes)
	I1217 20:59:28.058120  568189 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1217 20:59:28.072480  568189 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1217 20:59:28.096660  568189 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1217 20:59:28.111050  568189 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1217 20:59:28.125599  568189 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1217 20:59:28.139988  568189 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1217 20:59:28.154668  568189 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1217 20:59:28.168340  568189 ssh_runner.go:195] Run: openssl version
	I1217 20:59:28.174792  568189 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:59:28.182440  568189 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 20:59:28.191221  568189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:59:28.195516  568189 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:59:28.195766  568189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:59:28.244735  568189 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 20:59:28.252179  568189 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/488412.pem
	I1217 20:59:28.259686  568189 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/488412.pem /etc/ssl/certs/488412.pem
	I1217 20:59:28.270202  568189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/488412.pem
	I1217 20:59:28.274707  568189 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 20:21 /usr/share/ca-certificates/488412.pem
	I1217 20:59:28.274826  568189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/488412.pem
	I1217 20:59:28.316566  568189 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 20:59:28.324532  568189 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4884122.pem
	I1217 20:59:28.331852  568189 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4884122.pem /etc/ssl/certs/4884122.pem
	I1217 20:59:28.344147  568189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4884122.pem
	I1217 20:59:28.349920  568189 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 20:21 /usr/share/ca-certificates/4884122.pem
	I1217 20:59:28.350026  568189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4884122.pem
	I1217 20:59:28.397463  568189 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 20:59:28.405538  568189 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 20:59:28.409482  568189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 20:59:28.452939  568189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 20:59:28.494338  568189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 20:59:28.540466  568189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 20:59:28.582836  568189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 20:59:28.624131  568189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1217 20:59:28.667766  568189 kubeadm.go:935] updating node {m03 192.168.49.4 8443 v1.34.3 crio true true} ...
	I1217 20:59:28.667874  568189 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-148567-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:ha-148567 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 20:59:28.667909  568189 kube-vip.go:115] generating kube-vip config ...
	I1217 20:59:28.667967  568189 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1217 20:59:28.681456  568189 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1217 20:59:28.681523  568189 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1217 20:59:28.681593  568189 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1217 20:59:28.689896  568189 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 20:59:28.689971  568189 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1217 20:59:28.697831  568189 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1217 20:59:28.713126  568189 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 20:59:28.729184  568189 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1217 20:59:28.745530  568189 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1217 20:59:28.749870  568189 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 20:59:28.762032  568189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:59:28.899317  568189 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 20:59:28.916505  568189 start.go:236] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 20:59:28.916882  568189 config.go:182] Loaded profile config "ha-148567": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:59:28.921876  568189 out.go:179] * Verifying Kubernetes components...
	I1217 20:59:28.924845  568189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:59:29.067107  568189 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 20:59:29.082388  568189 kapi.go:59] client config for ha-148567: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/client.crt", KeyFile:"/home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/client.key", CAFile:"/home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb51f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1217 20:59:29.082463  568189 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1217 20:59:29.082744  568189 node_ready.go:35] waiting up to 6m0s for node "ha-148567-m03" to be "Ready" ...
	I1217 20:59:29.086184  568189 node_ready.go:49] node "ha-148567-m03" is "Ready"
	I1217 20:59:29.086213  568189 node_ready.go:38] duration metric: took 3.444045ms for node "ha-148567-m03" to be "Ready" ...
	I1217 20:59:29.086226  568189 api_server.go:52] waiting for apiserver process to appear ...
	I1217 20:59:29.086308  568189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:59:29.587146  568189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:59:30.086424  568189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:59:30.587043  568189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:59:31.087307  568189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:59:31.587125  568189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:59:32.087199  568189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:59:32.586440  568189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:59:33.087014  568189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:59:33.587262  568189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:59:34.086776  568189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:59:34.586785  568189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:59:35.086598  568189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:59:35.587225  568189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:59:36.087060  568189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:59:36.587238  568189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:59:37.087356  568189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:59:37.586962  568189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:59:38.086425  568189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:59:38.587186  568189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:59:39.086440  568189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:59:39.587206  568189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:59:40.087337  568189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:59:40.586682  568189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:59:41.086960  568189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:59:41.587321  568189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:59:42.087299  568189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:59:42.587074  568189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:59:43.086416  568189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:59:43.100960  568189 api_server.go:72] duration metric: took 14.18440701s to wait for apiserver process to appear ...
	I1217 20:59:43.100982  568189 api_server.go:88] waiting for apiserver healthz status ...
	I1217 20:59:43.101000  568189 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1217 20:59:43.111943  568189 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1217 20:59:43.113605  568189 api_server.go:141] control plane version: v1.34.3
	I1217 20:59:43.113627  568189 api_server.go:131] duration metric: took 12.639438ms to wait for apiserver health ...
	I1217 20:59:43.113635  568189 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 20:59:43.122498  568189 system_pods.go:59] 26 kube-system pods found
	I1217 20:59:43.122587  568189 system_pods.go:61] "coredns-66bc5c9577-l8xqv" [e3a7d90f-a4fd-4393-a20a-d49f8a8aa0db] Running
	I1217 20:59:43.122609  568189 system_pods.go:61] "coredns-66bc5c9577-wgcmx" [4fbbda83-a7d2-41c0-98ea-066d493cd483] Running
	I1217 20:59:43.122628  568189 system_pods.go:61] "etcd-ha-148567" [d54c6b05-3d02-4af3-a488-ffe69d55c8c3] Running
	I1217 20:59:43.122660  568189 system_pods.go:61] "etcd-ha-148567-m02" [574da258-f8b2-4230-93a1-028b2c5765bd] Running
	I1217 20:59:43.122680  568189 system_pods.go:61] "etcd-ha-148567-m03" [c1a60f08-76af-48b5-afe6-9389bc0fca8b] Running
	I1217 20:59:43.122700  568189 system_pods.go:61] "kindnet-4xxcs" [3950f031-7524-40a2-aa03-f50e754478ed] Running
	I1217 20:59:43.122719  568189 system_pods.go:61] "kindnet-88zsz" [22238c23-358a-49e1-82d6-ac88a03c654f] Running
	I1217 20:59:43.122747  568189 system_pods.go:61] "kindnet-gwspj" [f5b895d0-8eba-4923-9537-bd07bd57d3b5] Running
	I1217 20:59:43.122769  568189 system_pods.go:61] "kindnet-pv94f" [6135ada6-3e1f-4b1b-a2e2-014a1eb62772] Running
	I1217 20:59:43.122787  568189 system_pods.go:61] "kube-apiserver-ha-148567" [3b830c1a-94be-41e9-bc3a-725b97833055] Running
	I1217 20:59:43.122807  568189 system_pods.go:61] "kube-apiserver-ha-148567-m02" [0f0e1b1c-b98e-46cd-a683-ce654b6fdeb1] Running
	I1217 20:59:43.122827  568189 system_pods.go:61] "kube-apiserver-ha-148567-m03" [1be33e29-7193-43bb-aaf4-e48cbf50abf9] Running
	I1217 20:59:43.122857  568189 system_pods.go:61] "kube-controller-manager-ha-148567" [8baf857a-5cbe-4de6-9aba-7a244ab2fb08] Running
	I1217 20:59:43.122886  568189 system_pods.go:61] "kube-controller-manager-ha-148567-m02" [da8d38ea-86a0-4b97-9512-0541ac0d2d44] Running
	I1217 20:59:43.122906  568189 system_pods.go:61] "kube-controller-manager-ha-148567-m03" [360b1069-90d9-45be-a0c3-3c81c88154e0] Running
	I1217 20:59:43.122929  568189 system_pods.go:61] "kube-proxy-8nmpd" [52f8f6d6-8de7-4758-b35e-843a2a6ed562] Running
	I1217 20:59:43.122960  568189 system_pods.go:61] "kube-proxy-9n5cb" [775371d6-bc7d-40f6-8e0f-655f265828ba] Running
	I1217 20:59:43.122982  568189 system_pods.go:61] "kube-proxy-9rv8b" [7e160782-bab5-49b5-950c-377bde3bec7f] Running
	I1217 20:59:43.123002  568189 system_pods.go:61] "kube-proxy-cbk47" [149fd1d8-7762-49fd-81da-23047452dc4a] Running
	I1217 20:59:43.123021  568189 system_pods.go:61] "kube-scheduler-ha-148567" [a04a9bb2-7f73-4940-9952-049c4b406086] Running
	I1217 20:59:43.123040  568189 system_pods.go:61] "kube-scheduler-ha-148567-m02" [534b3ee9-d48c-450d-b20d-308e0a13a720] Running
	I1217 20:59:43.123071  568189 system_pods.go:61] "kube-scheduler-ha-148567-m03" [08f0c7d8-2adc-444e-a8db-5ad41340d101] Running
	I1217 20:59:43.123099  568189 system_pods.go:61] "kube-vip-ha-148567" [05c573d7-3cc9-4d90-8a3e-154ba3c7423a] Running
	I1217 20:59:43.123129  568189 system_pods.go:61] "kube-vip-ha-148567-m02" [4c4de02d-d57f-4a66-983e-12dbdb0f3521] Running
	I1217 20:59:43.123149  568189 system_pods.go:61] "kube-vip-ha-148567-m03" [37a516e3-0f3b-413f-be72-a445c095667f] Running
	I1217 20:59:43.123176  568189 system_pods.go:61] "storage-provisioner" [b613dd7f-cf83-4a6a-a6b8-c7b6282790ab] Running
	I1217 20:59:43.123204  568189 system_pods.go:74] duration metric: took 9.561362ms to wait for pod list to return data ...
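
The pod inventory above corresponds to a single client-go List call against the kube-system namespace, followed by a per-pod phase check. A sketch under the assumption that a kubeconfig at the default path points at the cluster under test (the real code builds its client from the profile's client.crt/key, as the rest.Config dump later in this log shows):

    package main

    import (
    	"context"
    	"fmt"
    	"path/filepath"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    	"k8s.io/client-go/util/homedir"
    )

    func main() {
    	// Assumes ~/.kube/config points at the cluster under test.
    	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
    	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    	if err != nil {
    		panic(err)
    	}
    	clientset, err := kubernetes.NewForConfig(config)
    	if err != nil {
    		panic(err)
    	}
    	pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
    	for _, p := range pods.Items {
    		if p.Status.Phase != corev1.PodRunning {
    			fmt.Printf("%q is not Running (phase %s)\n", p.Name, p.Status.Phase)
    		}
    	}
    }
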
	I1217 20:59:43.123228  568189 default_sa.go:34] waiting for default service account to be created ...
	I1217 20:59:43.126857  568189 default_sa.go:45] found service account: "default"
	I1217 20:59:43.126922  568189 default_sa.go:55] duration metric: took 3.673226ms for default service account to be created ...
	I1217 20:59:43.126952  568189 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 20:59:43.134811  568189 system_pods.go:86] 26 kube-system pods found
	I1217 20:59:43.134893  568189 system_pods.go:89] "coredns-66bc5c9577-l8xqv" [e3a7d90f-a4fd-4393-a20a-d49f8a8aa0db] Running
	I1217 20:59:43.134915  568189 system_pods.go:89] "coredns-66bc5c9577-wgcmx" [4fbbda83-a7d2-41c0-98ea-066d493cd483] Running
	I1217 20:59:43.134937  568189 system_pods.go:89] "etcd-ha-148567" [d54c6b05-3d02-4af3-a488-ffe69d55c8c3] Running
	I1217 20:59:43.134966  568189 system_pods.go:89] "etcd-ha-148567-m02" [574da258-f8b2-4230-93a1-028b2c5765bd] Running
	I1217 20:59:43.134990  568189 system_pods.go:89] "etcd-ha-148567-m03" [c1a60f08-76af-48b5-afe6-9389bc0fca8b] Running
	I1217 20:59:43.135010  568189 system_pods.go:89] "kindnet-4xxcs" [3950f031-7524-40a2-aa03-f50e754478ed] Running
	I1217 20:59:43.135031  568189 system_pods.go:89] "kindnet-88zsz" [22238c23-358a-49e1-82d6-ac88a03c654f] Running
	I1217 20:59:43.135052  568189 system_pods.go:89] "kindnet-gwspj" [f5b895d0-8eba-4923-9537-bd07bd57d3b5] Running
	I1217 20:59:43.135081  568189 system_pods.go:89] "kindnet-pv94f" [6135ada6-3e1f-4b1b-a2e2-014a1eb62772] Running
	I1217 20:59:43.135118  568189 system_pods.go:89] "kube-apiserver-ha-148567" [3b830c1a-94be-41e9-bc3a-725b97833055] Running
	I1217 20:59:43.135138  568189 system_pods.go:89] "kube-apiserver-ha-148567-m02" [0f0e1b1c-b98e-46cd-a683-ce654b6fdeb1] Running
	I1217 20:59:43.135160  568189 system_pods.go:89] "kube-apiserver-ha-148567-m03" [1be33e29-7193-43bb-aaf4-e48cbf50abf9] Running
	I1217 20:59:43.135194  568189 system_pods.go:89] "kube-controller-manager-ha-148567" [8baf857a-5cbe-4de6-9aba-7a244ab2fb08] Running
	I1217 20:59:43.135222  568189 system_pods.go:89] "kube-controller-manager-ha-148567-m02" [da8d38ea-86a0-4b97-9512-0541ac0d2d44] Running
	I1217 20:59:43.135243  568189 system_pods.go:89] "kube-controller-manager-ha-148567-m03" [360b1069-90d9-45be-a0c3-3c81c88154e0] Running
	I1217 20:59:43.135263  568189 system_pods.go:89] "kube-proxy-8nmpd" [52f8f6d6-8de7-4758-b35e-843a2a6ed562] Running
	I1217 20:59:43.135283  568189 system_pods.go:89] "kube-proxy-9n5cb" [775371d6-bc7d-40f6-8e0f-655f265828ba] Running
	I1217 20:59:43.135311  568189 system_pods.go:89] "kube-proxy-9rv8b" [7e160782-bab5-49b5-950c-377bde3bec7f] Running
	I1217 20:59:43.135338  568189 system_pods.go:89] "kube-proxy-cbk47" [149fd1d8-7762-49fd-81da-23047452dc4a] Running
	I1217 20:59:43.135357  568189 system_pods.go:89] "kube-scheduler-ha-148567" [a04a9bb2-7f73-4940-9952-049c4b406086] Running
	I1217 20:59:43.135375  568189 system_pods.go:89] "kube-scheduler-ha-148567-m02" [534b3ee9-d48c-450d-b20d-308e0a13a720] Running
	I1217 20:59:43.135394  568189 system_pods.go:89] "kube-scheduler-ha-148567-m03" [08f0c7d8-2adc-444e-a8db-5ad41340d101] Running
	I1217 20:59:43.135423  568189 system_pods.go:89] "kube-vip-ha-148567" [05c573d7-3cc9-4d90-8a3e-154ba3c7423a] Running
	I1217 20:59:43.135455  568189 system_pods.go:89] "kube-vip-ha-148567-m02" [4c4de02d-d57f-4a66-983e-12dbdb0f3521] Running
	I1217 20:59:43.135477  568189 system_pods.go:89] "kube-vip-ha-148567-m03" [37a516e3-0f3b-413f-be72-a445c095667f] Running
	I1217 20:59:43.135495  568189 system_pods.go:89] "storage-provisioner" [b613dd7f-cf83-4a6a-a6b8-c7b6282790ab] Running
	I1217 20:59:43.135529  568189 system_pods.go:126] duration metric: took 8.54658ms to wait for k8s-apps to be running ...
	I1217 20:59:43.135556  568189 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 20:59:43.135647  568189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 20:59:43.150029  568189 system_svc.go:56] duration metric: took 14.465953ms WaitForService to wait for kubelet
	I1217 20:59:43.150071  568189 kubeadm.go:587] duration metric: took 14.233522691s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 20:59:43.150090  568189 node_conditions.go:102] verifying NodePressure condition ...
	I1217 20:59:43.154561  568189 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1217 20:59:43.154592  568189 node_conditions.go:123] node cpu capacity is 2
	I1217 20:59:43.154613  568189 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1217 20:59:43.154619  568189 node_conditions.go:123] node cpu capacity is 2
	I1217 20:59:43.154624  568189 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1217 20:59:43.154628  568189 node_conditions.go:123] node cpu capacity is 2
	I1217 20:59:43.154641  568189 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1217 20:59:43.154646  568189 node_conditions.go:123] node cpu capacity is 2
	I1217 20:59:43.154651  568189 node_conditions.go:105] duration metric: took 4.555345ms to run NodePressure ...
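
The four capacity pairs above (one per node of the HA cluster) come from iterating the node list and reading Status.Capacity, while the NodePressure verification checks that no pressure condition is True. A sketch of that pass, reusing the same clientset setup as the pod-listing sketch earlier:

    package main

    import (
    	"context"
    	"fmt"
    	"path/filepath"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    	"k8s.io/client-go/util/homedir"
    )

    func main() {
    	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
    	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    	if err != nil {
    		panic(err)
    	}
    	clientset, err := kubernetes.NewForConfig(config)
    	if err != nil {
    		panic(err)
    	}
    	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, n := range nodes.Items {
    		fmt.Printf("node storage ephemeral capacity is %s\n", n.Status.Capacity.StorageEphemeral())
    		fmt.Printf("node cpu capacity is %s\n", n.Status.Capacity.Cpu())
    		for _, c := range n.Status.Conditions {
    			// MemoryPressure/DiskPressure/PIDPressure must all be False on a healthy node.
    			if (c.Type == corev1.NodeMemoryPressure ||
    				c.Type == corev1.NodeDiskPressure ||
    				c.Type == corev1.NodePIDPressure) && c.Status == corev1.ConditionTrue {
    				fmt.Printf("node %s reports %s\n", n.Name, c.Type)
    			}
    		}
    	}
    }
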
	I1217 20:59:43.154681  568189 start.go:242] waiting for startup goroutines ...
	I1217 20:59:43.154709  568189 start.go:256] writing updated cluster config ...
	I1217 20:59:43.158527  568189 out.go:203] 
	I1217 20:59:43.161746  568189 config.go:182] Loaded profile config "ha-148567": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:59:43.161871  568189 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/config.json ...
	I1217 20:59:43.165329  568189 out.go:179] * Starting "ha-148567-m04" worker node in "ha-148567" cluster
	I1217 20:59:43.168355  568189 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 20:59:43.171262  568189 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 20:59:43.174132  568189 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 20:59:43.174410  568189 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 20:59:43.174454  568189 cache.go:65] Caching tarball of preloaded images
	I1217 20:59:43.174570  568189 preload.go:238] Found /home/jenkins/minikube-integration/21808-485134/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1217 20:59:43.174613  568189 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1217 20:59:43.174766  568189 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/config.json ...
	I1217 20:59:43.198461  568189 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 20:59:43.198481  568189 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 20:59:43.198493  568189 cache.go:243] Successfully downloaded all kic artifacts
	I1217 20:59:43.198516  568189 start.go:360] acquireMachinesLock for ha-148567-m04: {Name:mk553b42915df9bd549a5c28a2faaee12bc3aaa4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 20:59:43.198572  568189 start.go:364] duration metric: took 34.134µs to acquireMachinesLock for "ha-148567-m04"
	I1217 20:59:43.198597  568189 start.go:96] Skipping create...Using existing machine configuration
	I1217 20:59:43.198602  568189 fix.go:54] fixHost starting: m04
	I1217 20:59:43.198879  568189 cli_runner.go:164] Run: docker container inspect ha-148567-m04 --format={{.State.Status}}
	I1217 20:59:43.217750  568189 fix.go:112] recreateIfNeeded on ha-148567-m04: state=Stopped err=<nil>
	W1217 20:59:43.217781  568189 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 20:59:43.221013  568189 out.go:252] * Restarting existing docker container for "ha-148567-m04" ...
	I1217 20:59:43.221102  568189 cli_runner.go:164] Run: docker start ha-148567-m04
	I1217 20:59:43.516797  568189 cli_runner.go:164] Run: docker container inspect ha-148567-m04 --format={{.State.Status}}
	I1217 20:59:43.540017  568189 kic.go:430] container "ha-148567-m04" state is running.
	I1217 20:59:43.540568  568189 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-148567-m04
	I1217 20:59:43.574859  568189 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/config.json ...
	I1217 20:59:43.575129  568189 machine.go:94] provisionDockerMachine start ...
	I1217 20:59:43.575199  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567-m04
	I1217 20:59:43.606726  568189 main.go:143] libmachine: Using SSH client type: native
	I1217 20:59:43.607040  568189 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33223 <nil> <nil>}
	I1217 20:59:43.607056  568189 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 20:59:43.607773  568189 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58242->127.0.0.1:33223: read: connection reset by peer
	I1217 20:59:46.803819  568189 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-148567-m04
	
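Note the first dial at 20:59:43.607 fails with "connection reset by peer" (sshd in the freshly restarted container is not up yet) and the provisioner succeeds on retry ~3s later. A sketch of that dial-with-retry pattern using golang.org/x/crypto/ssh; the key path and port are taken from the log, and dialWithRetry is a hypothetical helper:

    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"time"

    	"golang.org/x/crypto/ssh"
    )

    // dialWithRetry keeps dialing until the restarted container's sshd accepts
    // the handshake; early attempts often fail with "connection reset by peer".
    func dialWithRetry(addr string, cfg *ssh.ClientConfig, timeout time.Duration) (*ssh.Client, error) {
    	deadline := time.Now().Add(timeout)
    	for {
    		client, err := ssh.Dial("tcp", addr, cfg)
    		if err == nil {
    			return client, nil
    		}
    		if time.Now().After(deadline) {
    			return nil, fmt.Errorf("ssh dial %s: %w", addr, err)
    		}
    		log.Printf("Error dialing TCP: %v (retrying)", err)
    		time.Sleep(time.Second)
    	}
    }

    func main() {
    	// Per-machine key generated by minikube; path is illustrative.
    	key, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/machines/ha-148567-m04/id_rsa"))
    	if err != nil {
    		log.Fatal(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		log.Fatal(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test VM only
    	}
    	client, err := dialWithRetry("127.0.0.1:33223", cfg, 30*time.Second)
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()
    	session, err := client.NewSession()
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer session.Close()
    	out, err := session.Output("hostname")
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Printf("SSH cmd output: %s", out)
    }
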
	I1217 20:59:46.803848  568189 ubuntu.go:182] provisioning hostname "ha-148567-m04"
	I1217 20:59:46.803941  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567-m04
	I1217 20:59:46.836537  568189 main.go:143] libmachine: Using SSH client type: native
	I1217 20:59:46.836852  568189 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33223 <nil> <nil>}
	I1217 20:59:46.836874  568189 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-148567-m04 && echo "ha-148567-m04" | sudo tee /etc/hostname
	I1217 20:59:47.026899  568189 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-148567-m04
	
	I1217 20:59:47.027037  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567-m04
	I1217 20:59:47.062751  568189 main.go:143] libmachine: Using SSH client type: native
	I1217 20:59:47.063061  568189 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33223 <nil> <nil>}
	I1217 20:59:47.063082  568189 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-148567-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-148567-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-148567-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 20:59:47.256926  568189 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 20:59:47.257018  568189 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21808-485134/.minikube CaCertPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21808-485134/.minikube}
	I1217 20:59:47.257283  568189 ubuntu.go:190] setting up certificates
	I1217 20:59:47.257314  568189 provision.go:84] configureAuth start
	I1217 20:59:47.257398  568189 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-148567-m04
	I1217 20:59:47.295834  568189 provision.go:143] copyHostCerts
	I1217 20:59:47.295877  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem
	I1217 20:59:47.295912  568189 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem, removing ...
	I1217 20:59:47.295919  568189 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem
	I1217 20:59:47.296003  568189 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem (1082 bytes)
	I1217 20:59:47.296090  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem
	I1217 20:59:47.296108  568189 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem, removing ...
	I1217 20:59:47.296113  568189 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem
	I1217 20:59:47.296139  568189 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem (1123 bytes)
	I1217 20:59:47.296196  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem
	I1217 20:59:47.296215  568189 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem, removing ...
	I1217 20:59:47.296219  568189 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem
	I1217 20:59:47.296250  568189 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem (1675 bytes)
	I1217 20:59:47.296313  568189 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem org=jenkins.ha-148567-m04 san=[127.0.0.1 192.168.49.5 ha-148567-m04 localhost minikube]
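
The server cert above is generated with the SAN set [127.0.0.1 192.168.49.5 ha-148567-m04 localhost minikube] and signed by the machine CA. A self-contained crypto/x509 sketch of that step; it generates a throwaway CA in place of minikube's ca.pem/ca-key.pem, and error handling is elided for brevity:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Throwaway self-signed CA standing in for minikube's ca.pem/ca-key.pem.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{Organization: []string{"minikubeCA"}},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().AddDate(10, 0, 0),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Server cert carrying the same SAN set the log shows for ha-148567-m04.
    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-148567-m04"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(10, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"ha-148567-m04", "localhost", "minikube"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.5")},
    	}
    	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }
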
	I1217 20:59:47.379272  568189 provision.go:177] copyRemoteCerts
	I1217 20:59:47.379345  568189 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 20:59:47.379394  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567-m04
	I1217 20:59:47.403843  568189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33223 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/ha-148567-m04/id_rsa Username:docker}
	I1217 20:59:47.518369  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1217 20:59:47.518441  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 20:59:47.576564  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1217 20:59:47.576687  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1217 20:59:47.604142  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1217 20:59:47.604201  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 20:59:47.631334  568189 provision.go:87] duration metric: took 373.991006ms to configureAuth
	I1217 20:59:47.631359  568189 ubuntu.go:206] setting minikube options for container-runtime
	I1217 20:59:47.631685  568189 config.go:182] Loaded profile config "ha-148567": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:59:47.631793  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567-m04
	I1217 20:59:47.657183  568189 main.go:143] libmachine: Using SSH client type: native
	I1217 20:59:47.657502  568189 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33223 <nil> <nil>}
	I1217 20:59:47.657518  568189 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 20:59:48.158234  568189 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 20:59:48.158306  568189 machine.go:97] duration metric: took 4.583160847s to provisionDockerMachine
	I1217 20:59:48.158332  568189 start.go:293] postStartSetup for "ha-148567-m04" (driver="docker")
	I1217 20:59:48.158359  568189 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 20:59:48.158470  568189 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 20:59:48.158549  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567-m04
	I1217 20:59:48.182261  568189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33223 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/ha-148567-m04/id_rsa Username:docker}
	I1217 20:59:48.298135  568189 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 20:59:48.311846  568189 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 20:59:48.311884  568189 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 20:59:48.311907  568189 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-485134/.minikube/addons for local assets ...
	I1217 20:59:48.311974  568189 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-485134/.minikube/files for local assets ...
	I1217 20:59:48.312067  568189 filesync.go:149] local asset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem -> 4884122.pem in /etc/ssl/certs
	I1217 20:59:48.312079  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem -> /etc/ssl/certs/4884122.pem
	I1217 20:59:48.312200  568189 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 20:59:48.329656  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem --> /etc/ssl/certs/4884122.pem (1708 bytes)
	I1217 20:59:48.373531  568189 start.go:296] duration metric: took 215.167593ms for postStartSetup
	I1217 20:59:48.373663  568189 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 20:59:48.373725  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567-m04
	I1217 20:59:48.400005  568189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33223 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/ha-148567-m04/id_rsa Username:docker}
	I1217 20:59:48.502218  568189 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 20:59:48.508483  568189 fix.go:56] duration metric: took 5.309874613s for fixHost
	I1217 20:59:48.508507  568189 start.go:83] releasing machines lock for "ha-148567-m04", held for 5.309926708s
	I1217 20:59:48.508573  568189 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-148567-m04
	I1217 20:59:48.542166  568189 out.go:179] * Found network options:
	I1217 20:59:48.545031  568189 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	W1217 20:59:48.547822  568189 proxy.go:120] fail to check proxy env: Error ip not in block
	W1217 20:59:48.547865  568189 proxy.go:120] fail to check proxy env: Error ip not in block
	W1217 20:59:48.547882  568189 proxy.go:120] fail to check proxy env: Error ip not in block
	I1217 20:59:48.547964  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem (1338 bytes)
	W1217 20:59:48.548007  568189 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412_empty.pem, impossibly tiny 0 bytes
	I1217 20:59:48.548015  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 20:59:48.548043  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem (1082 bytes)
	I1217 20:59:48.548068  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem (1123 bytes)
	I1217 20:59:48.548092  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem (1675 bytes)
	I1217 20:59:48.548135  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem (1708 bytes)
	I1217 20:59:48.548169  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem -> /usr/share/ca-certificates/4884122.pem
	I1217 20:59:48.548185  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:59:48.548196  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem -> /usr/share/ca-certificates/488412.pem
	I1217 20:59:48.548214  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem --> /usr/share/ca-certificates/4884122.pem (1708 bytes)
	I1217 20:59:48.548266  568189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567-m04
	I1217 20:59:48.578677  568189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33223 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/ha-148567-m04/id_rsa Username:docker}
	I1217 20:59:48.719848  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 20:59:48.753882  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem --> /usr/share/ca-certificates/488412.pem (1338 bytes)
	I1217 20:59:48.792107  568189 ssh_runner.go:195] Run: openssl version
	I1217 20:59:48.804085  568189 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/488412.pem
	I1217 20:59:48.816313  568189 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/488412.pem /etc/ssl/certs/488412.pem
	I1217 20:59:48.832761  568189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/488412.pem
	I1217 20:59:48.840746  568189 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 20:21 /usr/share/ca-certificates/488412.pem
	I1217 20:59:48.840863  568189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/488412.pem
	I1217 20:59:48.902488  568189 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 20:59:48.912364  568189 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4884122.pem
	I1217 20:59:48.923914  568189 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4884122.pem /etc/ssl/certs/4884122.pem
	I1217 20:59:48.940092  568189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4884122.pem
	I1217 20:59:48.947071  568189 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 20:21 /usr/share/ca-certificates/4884122.pem
	I1217 20:59:48.947150  568189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4884122.pem
	I1217 20:59:49.021813  568189 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 20:59:49.034659  568189 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:59:49.053384  568189 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 20:59:49.069859  568189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:59:49.077887  568189 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:59:49.078004  568189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:59:49.137254  568189 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
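
Each cert install above follows OpenSSL's c_rehash convention: compute the subject hash (51391683, 3ec20f2e, b5213941 in this run) and symlink the PEM as /etc/ssl/certs/<hash>.0. Go's stdlib has no subject-hash helper, so a sketch that shells out to openssl, as the log itself does over SSH (installCert is a hypothetical wrapper):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // installCert reproduces the symlink dance above: trusted certs are looked
    // up by <subject-hash>.0 under /etc/ssl/certs.
    func installCert(pemPath string) error {
    	// `openssl x509 -hash -noout` prints the subject hash (e.g. b5213941).
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
    	// -f replaces a stale link, matching `sudo ln -fs` in the log.
    	return exec.Command("sudo", "ln", "-fs", pemPath, link).Run()
    }

    func main() {
    	if err := installCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Println("install failed:", err)
    	}
    }
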
	I1217 20:59:49.153091  568189 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-certificates >/dev/null 2>&1 && sudo update-ca-certificates || true"
	I1217 20:59:49.159186  568189 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-trust >/dev/null 2>&1 && sudo update-ca-trust extract || true"
	W1217 20:59:49.165011  568189 proxy.go:120] fail to check proxy env: Error ip not in block
	W1217 20:59:49.165053  568189 proxy.go:120] fail to check proxy env: Error ip not in block
	W1217 20:59:49.165063  568189 proxy.go:120] fail to check proxy env: Error ip not in block
	I1217 20:59:49.165151  568189 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 20:59:49.165273  568189 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 20:59:49.359347  568189 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 20:59:49.368376  568189 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 20:59:49.368491  568189 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 20:59:49.391939  568189 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 20:59:49.392014  568189 start.go:496] detecting cgroup driver to use...
	I1217 20:59:49.392069  568189 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 20:59:49.392143  568189 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 20:59:49.427410  568189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 20:59:49.445092  568189 docker.go:218] disabling cri-docker service (if available) ...
	I1217 20:59:49.445199  568189 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 20:59:49.463345  568189 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 20:59:49.480078  568189 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 20:59:49.663757  568189 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 20:59:49.840193  568189 docker.go:234] disabling docker service ...
	I1217 20:59:49.840317  568189 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 20:59:49.860557  568189 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 20:59:49.877087  568189 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 20:59:50.055711  568189 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 20:59:50.231385  568189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 20:59:50.254028  568189 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 20:59:50.285776  568189 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 20:59:50.285901  568189 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:59:50.299125  568189 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1217 20:59:50.299249  568189 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:59:50.308719  568189 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:59:50.317674  568189 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:59:50.326552  568189 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 20:59:50.334774  568189 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:59:50.343683  568189 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:59:50.357610  568189 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
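
Stringing the sed edits above together, the touched keys in /etc/crio/crio.conf.d/02-crio.conf should end up roughly like this. This is a reconstruction from the substitutions, not a dump of the actual file; surrounding keys and section headers are omitted:

    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
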
	I1217 20:59:50.371978  568189 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 20:59:50.381012  568189 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 20:59:50.389890  568189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:59:50.573931  568189 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 20:59:50.817600  568189 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 20:59:50.817730  568189 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 20:59:50.823707  568189 start.go:564] Will wait 60s for crictl version
	I1217 20:59:50.823823  568189 ssh_runner.go:195] Run: which crictl
	I1217 20:59:50.829375  568189 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 20:59:50.907046  568189 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 20:59:50.907198  568189 ssh_runner.go:195] Run: crio --version
	I1217 20:59:50.968526  568189 ssh_runner.go:195] Run: crio --version
	I1217 20:59:51.022232  568189 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	I1217 20:59:51.025095  568189 out.go:179]   - env NO_PROXY=192.168.49.2
	I1217 20:59:51.028040  568189 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1217 20:59:51.031031  568189 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	I1217 20:59:51.033982  568189 cli_runner.go:164] Run: docker network inspect ha-148567 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 20:59:51.058290  568189 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1217 20:59:51.064756  568189 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 20:59:51.084472  568189 mustload.go:66] Loading cluster: ha-148567
	I1217 20:59:51.084822  568189 config.go:182] Loaded profile config "ha-148567": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:59:51.085173  568189 cli_runner.go:164] Run: docker container inspect ha-148567 --format={{.State.Status}}
	I1217 20:59:51.122113  568189 host.go:66] Checking if "ha-148567" exists ...
	I1217 20:59:51.122410  568189 certs.go:69] Setting up /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567 for IP: 192.168.49.5
	I1217 20:59:51.122425  568189 certs.go:195] generating shared ca certs ...
	I1217 20:59:51.122444  568189 certs.go:227] acquiring lock for ca certs: {Name:mk2b1bd9fa0b029b02dd38a7b1b08717d4426eca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:59:51.122555  568189 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.key
	I1217 20:59:51.122603  568189 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.key
	I1217 20:59:51.122617  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1217 20:59:51.122638  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1217 20:59:51.122649  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1217 20:59:51.122665  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1217 20:59:51.122723  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem (1338 bytes)
	W1217 20:59:51.122759  568189 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412_empty.pem, impossibly tiny 0 bytes
	I1217 20:59:51.122771  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 20:59:51.122798  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem (1082 bytes)
	I1217 20:59:51.122830  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem (1123 bytes)
	I1217 20:59:51.122855  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem (1675 bytes)
	I1217 20:59:51.122904  568189 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem (1708 bytes)
	I1217 20:59:51.122943  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem -> /usr/share/ca-certificates/4884122.pem
	I1217 20:59:51.122961  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:59:51.122973  568189 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem -> /usr/share/ca-certificates/488412.pem
	I1217 20:59:51.122997  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 20:59:51.146685  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 20:59:51.175270  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 20:59:51.202157  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1217 20:59:51.226103  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem --> /usr/share/ca-certificates/4884122.pem (1708 bytes)
	I1217 20:59:51.248874  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 20:59:51.269857  568189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem --> /usr/share/ca-certificates/488412.pem (1338 bytes)
	I1217 20:59:51.310997  568189 ssh_runner.go:195] Run: openssl version
	I1217 20:59:51.319341  568189 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/488412.pem
	I1217 20:59:51.330020  568189 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/488412.pem /etc/ssl/certs/488412.pem
	I1217 20:59:51.339343  568189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/488412.pem
	I1217 20:59:51.350841  568189 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 20:21 /usr/share/ca-certificates/488412.pem
	I1217 20:59:51.350957  568189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/488412.pem
	I1217 20:59:51.400605  568189 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 20:59:51.414512  568189 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4884122.pem
	I1217 20:59:51.424023  568189 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4884122.pem /etc/ssl/certs/4884122.pem
	I1217 20:59:51.432640  568189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4884122.pem
	I1217 20:59:51.437401  568189 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 20:21 /usr/share/ca-certificates/4884122.pem
	I1217 20:59:51.437481  568189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4884122.pem
	I1217 20:59:51.482765  568189 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 20:59:51.491449  568189 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:59:51.501741  568189 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 20:59:51.515339  568189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:59:51.520544  568189 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:59:51.520666  568189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:59:51.565528  568189 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 20:59:51.574279  568189 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 20:59:51.579195  568189 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 20:59:51.579288  568189 kubeadm.go:935] updating node {m04 192.168.49.5 0 v1.34.3  false true} ...
	I1217 20:59:51.579397  568189 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-148567-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:ha-148567 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 20:59:51.579514  568189 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1217 20:59:51.588520  568189 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 20:59:51.588644  568189 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1217 20:59:51.600506  568189 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1217 20:59:51.617987  568189 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 20:59:51.637341  568189 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1217 20:59:51.641707  568189 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 20:59:51.653386  568189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:59:51.824077  568189 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 20:59:51.843148  568189 start.go:236] Will wait 6m0s for node &{Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.3 ContainerRuntime: ControlPlane:false Worker:true}
	I1217 20:59:51.843522  568189 config.go:182] Loaded profile config "ha-148567": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:59:51.848815  568189 out.go:179] * Verifying Kubernetes components...
	I1217 20:59:51.852560  568189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:59:51.982897  568189 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 20:59:52.000066  568189 kapi.go:59] client config for ha-148567: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/client.crt", KeyFile:"/home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/client.key", CAFile:"/home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb51f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1217 20:59:52.000192  568189 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1217 20:59:52.000451  568189 node_ready.go:35] waiting up to 6m0s for node "ha-148567-m04" to be "Ready" ...
	I1217 20:59:52.006183  568189 node_ready.go:49] node "ha-148567-m04" is "Ready"
	I1217 20:59:52.006239  568189 node_ready.go:38] duration metric: took 5.759781ms for node "ha-148567-m04" to be "Ready" ...
	I1217 20:59:52.006258  568189 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 20:59:52.006601  568189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 20:59:52.047225  568189 system_svc.go:56] duration metric: took 40.959365ms WaitForService to wait for kubelet
	I1217 20:59:52.047255  568189 kubeadm.go:587] duration metric: took 203.674646ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 20:59:52.047276  568189 node_conditions.go:102] verifying NodePressure condition ...
	I1217 20:59:52.051902  568189 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1217 20:59:52.051946  568189 node_conditions.go:123] node cpu capacity is 2
	I1217 20:59:52.051960  568189 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1217 20:59:52.051980  568189 node_conditions.go:123] node cpu capacity is 2
	I1217 20:59:52.051986  568189 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1217 20:59:52.051991  568189 node_conditions.go:123] node cpu capacity is 2
	I1217 20:59:52.052000  568189 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1217 20:59:52.052005  568189 node_conditions.go:123] node cpu capacity is 2
	I1217 20:59:52.052015  568189 node_conditions.go:105] duration metric: took 4.734079ms to run NodePressure ...
	I1217 20:59:52.052027  568189 start.go:242] waiting for startup goroutines ...
	I1217 20:59:52.052063  568189 start.go:256] writing updated cluster config ...
	I1217 20:59:52.052403  568189 ssh_runner.go:195] Run: rm -f paused
	I1217 20:59:52.057083  568189 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 20:59:52.057721  568189 kapi.go:59] client config for ha-148567: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/client.crt", KeyFile:"/home/jenkins/minikube-integration/21808-485134/.minikube/profiles/ha-148567/client.key", CAFile:"/home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb51f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 20:59:52.075282  568189 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-l8xqv" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:59:52.083372  568189 pod_ready.go:94] pod "coredns-66bc5c9577-l8xqv" is "Ready"
	I1217 20:59:52.083403  568189 pod_ready.go:86] duration metric: took 8.086341ms for pod "coredns-66bc5c9577-l8xqv" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:59:52.083414  568189 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-wgcmx" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:59:52.104642  568189 pod_ready.go:94] pod "coredns-66bc5c9577-wgcmx" is "Ready"
	I1217 20:59:52.104676  568189 pod_ready.go:86] duration metric: took 21.254359ms for pod "coredns-66bc5c9577-wgcmx" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:59:52.108222  568189 pod_ready.go:83] waiting for pod "etcd-ha-148567" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:59:52.114067  568189 pod_ready.go:94] pod "etcd-ha-148567" is "Ready"
	I1217 20:59:52.114095  568189 pod_ready.go:86] duration metric: took 5.843992ms for pod "etcd-ha-148567" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:59:52.114104  568189 pod_ready.go:83] waiting for pod "etcd-ha-148567-m02" in "kube-system" namespace to be "Ready" or be gone ...
	W1217 20:59:54.121101  568189 pod_ready.go:104] pod "etcd-ha-148567-m02" is not "Ready", error: <nil>
	W1217 20:59:56.121594  568189 pod_ready.go:104] pod "etcd-ha-148567-m02" is not "Ready", error: <nil>
	I1217 20:59:58.129487  568189 pod_ready.go:94] pod "etcd-ha-148567-m02" is "Ready"
	I1217 20:59:58.129512  568189 pod_ready.go:86] duration metric: took 6.015400557s for pod "etcd-ha-148567-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:59:58.129523  568189 pod_ready.go:83] waiting for pod "etcd-ha-148567-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:59:58.142269  568189 pod_ready.go:94] pod "etcd-ha-148567-m03" is "Ready"
	I1217 20:59:58.142292  568189 pod_ready.go:86] duration metric: took 12.762885ms for pod "etcd-ha-148567-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:59:58.146453  568189 pod_ready.go:83] waiting for pod "kube-apiserver-ha-148567" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:59:58.164280  568189 pod_ready.go:94] pod "kube-apiserver-ha-148567" is "Ready"
	I1217 20:59:58.164356  568189 pod_ready.go:86] duration metric: took 17.878983ms for pod "kube-apiserver-ha-148567" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:59:58.164381  568189 pod_ready.go:83] waiting for pod "kube-apiserver-ha-148567-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:59:58.259174  568189 request.go:683] "Waited before sending request" delay="88.189794ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-148567-m02"
	I1217 20:59:58.268569  568189 pod_ready.go:94] pod "kube-apiserver-ha-148567-m02" is "Ready"
	I1217 20:59:58.268593  568189 pod_ready.go:86] duration metric: took 104.192931ms for pod "kube-apiserver-ha-148567-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:59:58.268603  568189 pod_ready.go:83] waiting for pod "kube-apiserver-ha-148567-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:59:58.458982  568189 request.go:683] "Waited before sending request" delay="190.303242ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-148567-m03"
	I1217 20:59:58.658315  568189 request.go:683] "Waited before sending request" delay="195.215539ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-148567-m03"
	I1217 20:59:58.661689  568189 pod_ready.go:94] pod "kube-apiserver-ha-148567-m03" is "Ready"
	I1217 20:59:58.661723  568189 pod_ready.go:86] duration metric: took 393.113399ms for pod "kube-apiserver-ha-148567-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:59:58.859073  568189 request.go:683] "Waited before sending request" delay="197.228659ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I1217 20:59:58.863798  568189 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-148567" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:59:59.059209  568189 request.go:683] "Waited before sending request" delay="195.315815ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-148567"
	I1217 20:59:59.258903  568189 request.go:683] "Waited before sending request" delay="196.340082ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-148567"
	I1217 20:59:59.265017  568189 pod_ready.go:94] pod "kube-controller-manager-ha-148567" is "Ready"
	I1217 20:59:59.265041  568189 pod_ready.go:86] duration metric: took 401.217693ms for pod "kube-controller-manager-ha-148567" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:59:59.265051  568189 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-148567-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:59:59.458390  568189 request.go:683] "Waited before sending request" delay="193.253489ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-148567-m02"
	I1217 20:59:59.658551  568189 request.go:683] "Waited before sending request" delay="180.126333ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-148567-m02"
	I1217 20:59:59.662062  568189 pod_ready.go:94] pod "kube-controller-manager-ha-148567-m02" is "Ready"
	I1217 20:59:59.662093  568189 pod_ready.go:86] duration metric: took 397.034758ms for pod "kube-controller-manager-ha-148567-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:59:59.662104  568189 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-148567-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:59:59.858282  568189 request.go:683] "Waited before sending request" delay="196.102269ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-148567-m03"
	I1217 21:00:00.075408  568189 request.go:683] "Waited before sending request" delay="213.781913ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-148567-m03"
	I1217 21:00:00.089107  568189 pod_ready.go:94] pod "kube-controller-manager-ha-148567-m03" is "Ready"
	I1217 21:00:00.089136  568189 pod_ready.go:86] duration metric: took 427.024958ms for pod "kube-controller-manager-ha-148567-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 21:00:00.258516  568189 request.go:683] "Waited before sending request" delay="169.272025ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-proxy"
	I1217 21:00:00.322743  568189 pod_ready.go:83] waiting for pod "kube-proxy-8nmpd" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 21:00:00.459982  568189 request.go:683] "Waited before sending request" delay="137.098152ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8nmpd"
	I1217 21:00:00.701120  568189 pod_ready.go:94] pod "kube-proxy-8nmpd" is "Ready"
	I1217 21:00:00.701146  568189 pod_ready.go:86] duration metric: took 378.365284ms for pod "kube-proxy-8nmpd" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 21:00:00.701157  568189 pod_ready.go:83] waiting for pod "kube-proxy-9n5cb" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 21:00:00.858493  568189 request.go:683] "Waited before sending request" delay="157.248259ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9n5cb"
	I1217 21:00:01.058920  568189 request.go:683] "Waited before sending request" delay="150.537073ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-148567"
	I1217 21:00:01.068198  568189 pod_ready.go:94] pod "kube-proxy-9n5cb" is "Ready"
	I1217 21:00:01.068230  568189 pod_ready.go:86] duration metric: took 367.062133ms for pod "kube-proxy-9n5cb" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 21:00:01.068243  568189 pod_ready.go:83] waiting for pod "kube-proxy-9rv8b" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 21:00:01.262645  568189 request.go:683] "Waited before sending request" delay="194.315293ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9rv8b"
	I1217 21:00:01.458640  568189 request.go:683] "Waited before sending request" delay="153.080094ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-148567-m04"
	I1217 21:00:01.462978  568189 pod_ready.go:94] pod "kube-proxy-9rv8b" is "Ready"
	I1217 21:00:01.463012  568189 pod_ready.go:86] duration metric: took 394.75948ms for pod "kube-proxy-9rv8b" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 21:00:01.463024  568189 pod_ready.go:83] waiting for pod "kube-proxy-cbk47" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 21:00:01.658301  568189 request.go:683] "Waited before sending request" delay="195.184202ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cbk47"
	I1217 21:00:01.858277  568189 request.go:683] "Waited before sending request" delay="195.25946ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-148567-m02"
	I1217 21:00:01.862378  568189 pod_ready.go:94] pod "kube-proxy-cbk47" is "Ready"
	I1217 21:00:01.862409  568189 pod_ready.go:86] duration metric: took 399.37762ms for pod "kube-proxy-cbk47" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 21:00:02.058910  568189 request.go:683] "Waited before sending request" delay="196.359519ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-scheduler"
	I1217 21:00:02.063347  568189 pod_ready.go:83] waiting for pod "kube-scheduler-ha-148567" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 21:00:02.258828  568189 request.go:683] "Waited before sending request" delay="195.344917ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-148567"
	I1217 21:00:02.458794  568189 request.go:683] "Waited before sending request" delay="192.303347ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-148567"
	I1217 21:00:02.462249  568189 pod_ready.go:94] pod "kube-scheduler-ha-148567" is "Ready"
	I1217 21:00:02.462330  568189 pod_ready.go:86] duration metric: took 398.949995ms for pod "kube-scheduler-ha-148567" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 21:00:02.462347  568189 pod_ready.go:83] waiting for pod "kube-scheduler-ha-148567-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 21:00:02.658751  568189 request.go:683] "Waited before sending request" delay="196.3297ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-148567-m02"
	I1217 21:00:02.858900  568189 request.go:683] "Waited before sending request" delay="196.191697ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-148567-m02"
	I1217 21:00:03.058800  568189 request.go:683] "Waited before sending request" delay="96.270325ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-148567-m02"
	I1217 21:00:03.258609  568189 request.go:683] "Waited before sending request" delay="196.310803ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-148567-m02"
	I1217 21:00:03.658820  568189 request.go:683] "Waited before sending request" delay="192.320766ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-148567-m02"
	I1217 21:00:04.059107  568189 request.go:683] "Waited before sending request" delay="91.269847ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-148567-m02"
	W1217 21:00:04.473348  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	W1217 21:00:06.969463  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	[... 98 more identical pod_ready.go:104 warnings ("kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>), logged every 2-2.5s from 21:00:08 through 21:03:48 ...]
	W1217 21:03:50.969624  568189 pod_ready.go:104] pod "kube-scheduler-ha-148567-m02" is not "Ready", error: <nil>
	I1217 21:03:52.057311  568189 pod_ready.go:86] duration metric: took 3m49.59494638s for pod "kube-scheduler-ha-148567-m02" in "kube-system" namespace to be "Ready" or be gone ...
	W1217 21:03:52.057351  568189 pod_ready.go:65] not all pods in "kube-system" namespace with "component=kube-scheduler" label are "Ready", will retry: waitPodCondition: context deadline exceeded
	I1217 21:03:52.057365  568189 pod_ready.go:40] duration metric: took 4m0.000201029s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 21:03:52.060383  568189 out.go:203] 
	W1217 21:03:52.063300  568189 out.go:285] X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded
	I1217 21:03:52.066188  568189 out.go:203] 
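The trace above is minikube's pod_ready.go wait loop: for each kube-system pod it GETs the pod object, checks the PodReady condition, re-checks the hosting node, and retries on a roughly 2-second cadence until the 4-minute budget (the 4m0.000201029s at pod_ready.go:40) expires. The "Waited before sending request" lines are client-go's client-side token-bucket rate limiter spacing out those GETs. A minimal sketch of such a "Ready or be gone" wait, assuming client-go; the helper name is hypothetical, not minikube's actual code:

    // podready_sketch.go: hedged sketch of a "Ready or be gone" pod wait,
    // in the spirit of minikube's pod_ready.go; names here are hypothetical.
    package readiness

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // WaitPodReadyOrGone blocks until the named pod reports Ready, disappears,
    // or ctx's deadline passes (the failure mode in the log above).
    func WaitPodReadyOrGone(ctx context.Context, c kubernetes.Interface, ns, name string) error {
        for {
            pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
            if apierrors.IsNotFound(err) {
                return nil // "or be gone": a deleted pod also satisfies the wait
            }
            if err == nil {
                for _, cond := range pod.Status.Conditions {
                    if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            select {
            case <-ctx.Done():
                return fmt.Errorf("waitPodCondition: %w", ctx.Err())
            case <-time.After(2 * time.Second): // matches the ~2s retry cadence above
            }
        }
    }

When the deadline wins, ctx.Err() is context.DeadlineExceeded, which is what surfaces above as "waitPodCondition: context deadline exceeded" and ultimately as the GUEST_START exit.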
	
	
	==> CRI-O <==
	Dec 17 20:59:39 ha-148567 crio[710]: time="2025-12-17T20:59:39.629560273Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=9167353b-2bf3-479e-964a-74f0d40c8545 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:59:39 ha-148567 crio[710]: time="2025-12-17T20:59:39.630687753Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=4c1f126a-a7f4-4eb6-8471-96e15a4f4b97 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 20:59:39 ha-148567 crio[710]: time="2025-12-17T20:59:39.630831533Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 20:59:39 ha-148567 crio[710]: time="2025-12-17T20:59:39.637333781Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 20:59:39 ha-148567 crio[710]: time="2025-12-17T20:59:39.637640641Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/8a5b38e85e717e293c44b562e20cb9e6c498fea8bc90e344c95ff4782baf3677/merged/etc/passwd: no such file or directory"
	Dec 17 20:59:39 ha-148567 crio[710]: time="2025-12-17T20:59:39.637732916Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/8a5b38e85e717e293c44b562e20cb9e6c498fea8bc90e344c95ff4782baf3677/merged/etc/group: no such file or directory"
	Dec 17 20:59:39 ha-148567 crio[710]: time="2025-12-17T20:59:39.638163716Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 20:59:39 ha-148567 crio[710]: time="2025-12-17T20:59:39.663985219Z" level=info msg="Created container 5b4cc9c722ee34aabaca591c96b1752871791b2d6c7d43442e7dd50f3ee524e3: kube-system/storage-provisioner/storage-provisioner" id=4c1f126a-a7f4-4eb6-8471-96e15a4f4b97 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 20:59:39 ha-148567 crio[710]: time="2025-12-17T20:59:39.666283281Z" level=info msg="Starting container: 5b4cc9c722ee34aabaca591c96b1752871791b2d6c7d43442e7dd50f3ee524e3" id=dfa56923-d43f-4222-845d-8de9ac088dd0 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 20:59:39 ha-148567 crio[710]: time="2025-12-17T20:59:39.668658423Z" level=info msg="Started container" PID=1522 containerID=5b4cc9c722ee34aabaca591c96b1752871791b2d6c7d43442e7dd50f3ee524e3 description=kube-system/storage-provisioner/storage-provisioner id=dfa56923-d43f-4222-845d-8de9ac088dd0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4ef70d8ffc0a00de889fdcd244ebaeaece44bf36c2fbca9eaac20ddec8a9e090
	Dec 17 20:59:49 ha-148567 crio[710]: time="2025-12-17T20:59:49.371783983Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 17 20:59:49 ha-148567 crio[710]: time="2025-12-17T20:59:49.376846411Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 17 20:59:49 ha-148567 crio[710]: time="2025-12-17T20:59:49.376883285Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 17 20:59:49 ha-148567 crio[710]: time="2025-12-17T20:59:49.376913004Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 17 20:59:49 ha-148567 crio[710]: time="2025-12-17T20:59:49.394117485Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 17 20:59:49 ha-148567 crio[710]: time="2025-12-17T20:59:49.394157485Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 17 20:59:49 ha-148567 crio[710]: time="2025-12-17T20:59:49.394177596Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 17 20:59:49 ha-148567 crio[710]: time="2025-12-17T20:59:49.412098163Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 17 20:59:49 ha-148567 crio[710]: time="2025-12-17T20:59:49.412258182Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 17 20:59:49 ha-148567 crio[710]: time="2025-12-17T20:59:49.412360066Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 17 20:59:49 ha-148567 crio[710]: time="2025-12-17T20:59:49.424241463Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 17 20:59:49 ha-148567 crio[710]: time="2025-12-17T20:59:49.424398651Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 17 20:59:49 ha-148567 crio[710]: time="2025-12-17T20:59:49.424482041Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 17 20:59:49 ha-148567 crio[710]: time="2025-12-17T20:59:49.438777079Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 17 20:59:49 ha-148567 crio[710]: time="2025-12-17T20:59:49.438966218Z" level=info msg="Updated default CNI network name to kindnet"
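The CREATE/WRITE/RENAME lines show CRI-O's CNI monitor reacting to kindnet rewriting its config: kindnet writes 10-kindnet.conflist.temp, then renames it into place, and each filesystem event makes CRI-O re-resolve the default network. A minimal sketch of that inotify-style watch, assuming github.com/fsnotify/fsnotify; CRI-O's real monitor lives in its ocicni dependency and validates the parsed config besides:

    // cniwatch_sketch.go: hedged sketch of watching a CNI conf directory,
    // assuming the fsnotify library; not CRI-O's actual implementation.
    package main

    import (
        "log"
        "strings"

        "github.com/fsnotify/fsnotify"
    )

    func main() {
        w, err := fsnotify.NewWatcher()
        if err != nil {
            log.Fatal(err)
        }
        defer w.Close()
        // the same directory CRI-O watches in the log above
        if err := w.Add("/etc/cni/net.d"); err != nil {
            log.Fatal(err)
        }
        for ev := range w.Events {
            if !strings.Contains(ev.Name, ".conflist") {
                continue
            }
            // CREATE of the .temp file, WRITE, then RENAME into place,
            // mirroring the three event kinds in the CRI-O log.
            log.Printf("CNI monitoring event %s %q", ev.Op, ev.Name)
        }
    }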
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	5b4cc9c722ee3       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   4 minutes ago       Running             storage-provisioner       2                   4ef70d8ffc0a0       storage-provisioner                 kube-system
	c001c946de439       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   4 minutes ago       Running             coredns                   1                   929945e3d1a3e       coredns-66bc5c9577-wgcmx            kube-system
	3d59d54580266       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   4 minutes ago       Running             coredns                   1                   bb9c16be2a1ae       coredns-66bc5c9577-l8xqv            kube-system
	9c2f443274791       4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162   4 minutes ago       Running             kube-proxy                1                   45b30456d0d02       kube-proxy-9n5cb                    kube-system
	e05d2769fa75c       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   5 minutes ago       Running             busybox                   1                   48e89d3a90ce3       busybox-7b57f96db7-wpzp9            default
	36cc5a99d5800       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   5 minutes ago       Exited              storage-provisioner       1                   4ef70d8ffc0a0       storage-provisioner                 kube-system
	be62aea7ae9e3       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13   5 minutes ago       Running             kindnet-cni               1                   889abb571076e       kindnet-pv94f                       kube-system
	494e8522562ca       7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22   5 minutes ago       Running             kube-controller-manager   4                   762c1badea8ef       kube-controller-manager-ha-148567   kube-system
	58f3a197004f5       cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896   5 minutes ago       Running             kube-apiserver            2                   1572377e842c5       kube-apiserver-ha-148567            kube-system
	bf8c2f6823453       7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22   5 minutes ago       Exited              kube-controller-manager   3                   762c1badea8ef       kube-controller-manager-ha-148567   kube-system
	7b48eea7424a1       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   6 minutes ago       Running             etcd                      1                   97798849c8ba9       etcd-ha-148567                      kube-system
	055c04d40b9a0       2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6   6 minutes ago       Running             kube-scheduler            1                   778e1fadf4b3d       kube-scheduler-ha-148567            kube-system
	4f2a8a504377b       369db9dfa6fa96c1f4a0f3c827dbe864b5ded1802c8b4810b5ff9fcc5f5f2c70   6 minutes ago       Running             kube-vip                  0                   c8f53dfc2b78e       kube-vip-ha-148567                  kube-system
	0273f065d6acf       cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896   6 minutes ago       Exited              kube-apiserver            1                   1572377e842c5       kube-apiserver-ha-148567            kube-system
	
	
	==> coredns [3d59d545802667bca4afd18f76c3bf960baeb6a6cfa8136dd546f29b9af19a5f] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53197 - 52993 "HINFO IN 5822912137944380976.4895998307528040920. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020505453s
	
	
	==> coredns [c001c946de4393f262b155b7097a5e53a29de886277d7d4f4b38fbec1514bf01] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49334 - 9860 "HINFO IN 3026609084912095735.2907426380693665954. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.042306577s
	
	
	==> describe nodes <==
	Name:               ha-148567
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-148567
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e7c291d147a7a4c759554efbd6d659a1a65fa869
	                    minikube.k8s.io/name=ha-148567
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T20_52_58_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 20:52:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-148567
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 21:04:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 21:03:22 +0000   Wed, 17 Dec 2025 20:52:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 21:03:22 +0000   Wed, 17 Dec 2025 20:52:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 21:03:22 +0000   Wed, 17 Dec 2025 20:52:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 21:03:22 +0000   Wed, 17 Dec 2025 20:53:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-148567
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 0dc957e113b26e583da13082693ddabc
	  System UUID:                c516dc0e-66c5-424a-98b8-b8a74ede6e3d
	  Boot ID:                    7e62835b-3b3c-4293-85c7-fb0aa46e4d00
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-wpzp9             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m14s
	  kube-system                 coredns-66bc5c9577-l8xqv             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     11m
	  kube-system                 coredns-66bc5c9577-wgcmx             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     11m
	  kube-system                 etcd-ha-148567                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         11m
	  kube-system                 kindnet-pv94f                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-ha-148567             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-148567    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-9n5cb                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-148567             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-148567                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m57s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m57s                  kube-proxy       
	  Normal   Starting                 11m                    kube-proxy       
	  Normal   Starting                 11m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 11m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     11m (x8 over 11m)      kubelet          Node ha-148567 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)      kubelet          Node ha-148567 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)      kubelet          Node ha-148567 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m                    kubelet          Node ha-148567 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m                    kubelet          Node ha-148567 status is now: NodeHasSufficientPID
	  Normal   Starting                 11m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 11m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  11m                    kubelet          Node ha-148567 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           11m                    node-controller  Node ha-148567 event: Registered Node ha-148567 in Controller
	  Normal   NodeReady                10m                    kubelet          Node ha-148567 status is now: NodeReady
	  Normal   RegisteredNode           10m                    node-controller  Node ha-148567 event: Registered Node ha-148567 in Controller
	  Normal   RegisteredNode           9m41s                  node-controller  Node ha-148567 event: Registered Node ha-148567 in Controller
	  Normal   RegisteredNode           7m20s                  node-controller  Node ha-148567 event: Registered Node ha-148567 in Controller
	  Normal   NodeHasSufficientMemory  6m50s (x8 over 6m50s)  kubelet          Node ha-148567 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m50s (x8 over 6m50s)  kubelet          Node ha-148567 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m50s (x8 over 6m50s)  kubelet          Node ha-148567 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4m55s                  node-controller  Node ha-148567 event: Registered Node ha-148567 in Controller
	  Normal   RegisteredNode           4m51s                  node-controller  Node ha-148567 event: Registered Node ha-148567 in Controller
	  Normal   RegisteredNode           3m47s                  node-controller  Node ha-148567 event: Registered Node ha-148567 in Controller
	
	
	Name:               ha-148567-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-148567-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e7c291d147a7a4c759554efbd6d659a1a65fa869
	                    minikube.k8s.io/name=ha-148567
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_17T20_53_38_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 20:53:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-148567-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 21:04:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 21:02:05 +0000   Wed, 17 Dec 2025 20:56:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 21:02:05 +0000   Wed, 17 Dec 2025 20:56:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 21:02:05 +0000   Wed, 17 Dec 2025 20:56:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 21:02:05 +0000   Wed, 17 Dec 2025 20:56:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-148567-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 0dc957e113b26e583da13082693ddabc
	  System UUID:                f1741bb6-47c6-431c-9bdb-b61180c553d3
	  Boot ID:                    7e62835b-3b3c-4293-85c7-fb0aa46e4d00
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-d5rt7                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m14s
	  kube-system                 etcd-ha-148567-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         10m
	  kube-system                 kindnet-gwspj                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-apiserver-ha-148567-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-ha-148567-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-cbk47                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-ha-148567-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-vip-ha-148567-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m11s                  kube-proxy       
	  Normal   Starting                 10m                    kube-proxy       
	  Normal   CIDRAssignmentFailed     10m                    cidrAllocator    Node ha-148567-m02 status is now: CIDRAssignmentFailed
	  Normal   RegisteredNode           10m                    node-controller  Node ha-148567-m02 event: Registered Node ha-148567-m02 in Controller
	  Normal   RegisteredNode           10m                    node-controller  Node ha-148567-m02 event: Registered Node ha-148567-m02 in Controller
	  Normal   RegisteredNode           9m41s                  node-controller  Node ha-148567-m02 event: Registered Node ha-148567-m02 in Controller
	  Normal   NodeHasSufficientMemory  7m58s (x8 over 7m58s)  kubelet          Node ha-148567-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     7m58s (x8 over 7m58s)  kubelet          Node ha-148567-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    7m58s (x8 over 7m58s)  kubelet          Node ha-148567-m02 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 7m58s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 7m58s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeNotReady             7m27s                  node-controller  Node ha-148567-m02 status is now: NodeNotReady
	  Normal   RegisteredNode           7m20s                  node-controller  Node ha-148567-m02 event: Registered Node ha-148567-m02 in Controller
	  Normal   Starting                 6m47s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m47s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  6m46s (x8 over 6m47s)  kubelet          Node ha-148567-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m46s (x8 over 6m47s)  kubelet          Node ha-148567-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m46s (x8 over 6m47s)  kubelet          Node ha-148567-m02 status is now: NodeHasSufficientPID
	  Warning  ContainerGCFailed        5m47s                  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           4m55s                  node-controller  Node ha-148567-m02 event: Registered Node ha-148567-m02 in Controller
	  Normal   RegisteredNode           4m51s                  node-controller  Node ha-148567-m02 event: Registered Node ha-148567-m02 in Controller
	  Normal   RegisteredNode           3m47s                  node-controller  Node ha-148567-m02 event: Registered Node ha-148567-m02 in Controller
	
	
	Name:               ha-148567-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-148567-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e7c291d147a7a4c759554efbd6d659a1a65fa869
	                    minikube.k8s.io/name=ha-148567
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_17T20_55_18_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 20:55:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-148567-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 21:04:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 21:04:02 +0000   Wed, 17 Dec 2025 20:55:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 21:04:02 +0000   Wed, 17 Dec 2025 20:55:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 21:04:02 +0000   Wed, 17 Dec 2025 20:55:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 21:04:02 +0000   Wed, 17 Dec 2025 20:55:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-148567-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 0dc957e113b26e583da13082693ddabc
	  System UUID:                3417648c-3c2b-4e8d-9266-3d162fe27a2f
	  Boot ID:                    7e62835b-3b3c-4293-85c7-fb0aa46e4d00
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.4.0/24
	PodCIDRs:                     10.244.4.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-h2kwk    0 (0%)        0 (0%)      0 (0%)           0 (0%)         13s
	  kube-system                 kindnet-4xxcs               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m52s
	  kube-system                 kube-proxy-9rv8b            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m2s                   kube-proxy       
	  Normal   Starting                 8m49s                  kube-proxy       
	  Normal   NodeHasNoDiskPressure    8m52s (x3 over 8m52s)  kubelet          Node ha-148567-m04 status is now: NodeHasNoDiskPressure
	  Normal   CIDRAssignmentFailed     8m52s                  cidrAllocator    Node ha-148567-m04 status is now: CIDRAssignmentFailed
	  Normal   NodeHasSufficientPID     8m52s (x3 over 8m52s)  kubelet          Node ha-148567-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  8m52s (x3 over 8m52s)  kubelet          Node ha-148567-m04 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           8m51s                  node-controller  Node ha-148567-m04 event: Registered Node ha-148567-m04 in Controller
	  Normal   RegisteredNode           8m49s                  node-controller  Node ha-148567-m04 event: Registered Node ha-148567-m04 in Controller
	  Normal   RegisteredNode           8m48s                  node-controller  Node ha-148567-m04 event: Registered Node ha-148567-m04 in Controller
	  Normal   NodeReady                8m37s                  kubelet          Node ha-148567-m04 status is now: NodeReady
	  Normal   RegisteredNode           7m20s                  node-controller  Node ha-148567-m04 event: Registered Node ha-148567-m04 in Controller
	  Normal   RegisteredNode           4m55s                  node-controller  Node ha-148567-m04 event: Registered Node ha-148567-m04 in Controller
	  Normal   RegisteredNode           4m51s                  node-controller  Node ha-148567-m04 event: Registered Node ha-148567-m04 in Controller
	  Normal   Starting                 4m25s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 4m25s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  4m21s (x8 over 4m25s)  kubelet          Node ha-148567-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m21s (x8 over 4m25s)  kubelet          Node ha-148567-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m21s (x8 over 4m25s)  kubelet          Node ha-148567-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           3m47s                  node-controller  Node ha-148567-m04 event: Registered Node ha-148567-m04 in Controller
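The waiter above alternates pod GETs with node GETs (the /api/v1/nodes/... URLs in the throttling lines) because a pod whose node is gone or NotReady will never turn Ready; the describe output is the same data that check consumes, the NodeReady condition in particular. A minimal client-go sketch of the node-side check, with a hypothetical helper name:

    // nodeready_sketch.go: hedged sketch of the node-side readiness check.
    package readiness

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // NodeReady reports whether the node has Ready=True, i.e. the condition
    // rendered above as "Ready True ... KubeletReady".
    func NodeReady(ctx context.Context, c kubernetes.Interface, name string) (bool, error) {
        node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, cond := range node.Status.Conditions {
            if cond.Type == corev1.NodeReady {
                return cond.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }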
	
	
	==> dmesg <==
	[Dec17 19:02] hrtimer: interrupt took 71042327 ns
	[Dec17 20:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec17 20:12] overlayfs: idmapped layers are currently not supported
	[  +0.080288] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec17 20:17] overlayfs: idmapped layers are currently not supported
	[Dec17 20:18] overlayfs: idmapped layers are currently not supported
	[Dec17 20:35] overlayfs: idmapped layers are currently not supported
	[Dec17 20:52] overlayfs: idmapped layers are currently not supported
	[Dec17 20:53] overlayfs: idmapped layers are currently not supported
	[Dec17 20:54] overlayfs: idmapped layers are currently not supported
	[Dec17 20:55] overlayfs: idmapped layers are currently not supported
	[Dec17 20:56] overlayfs: idmapped layers are currently not supported
	[Dec17 20:57] overlayfs: idmapped layers are currently not supported
	[  +3.685974] overlayfs: idmapped layers are currently not supported
	[Dec17 20:59] overlayfs: idmapped layers are currently not supported
	[Dec17 21:00] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [7b48eea7424a1e799bb5102aad672e4089e73d5c20382c2df99a7acabddf99d2] <==
	{"level":"info","ts":"2025-12-17T20:59:37.788431Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"1206557d2b7140f9"}
	{"level":"info","ts":"2025-12-17T20:59:37.796672Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"1206557d2b7140f9","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2025-12-17T20:59:37.796823Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"1206557d2b7140f9"}
	{"level":"info","ts":"2025-12-17T20:59:37.820571Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"1206557d2b7140f9"}
	{"level":"info","ts":"2025-12-17T20:59:37.821082Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"1206557d2b7140f9"}
	{"level":"info","ts":"2025-12-17T21:00:00.564875Z","caller":"traceutil/trace.go:172","msg":"trace[730122254] transaction","detail":"{read_only:false; response_revision:2088; number_of_response:1; }","duration":"130.097535ms","start":"2025-12-17T21:00:00.434758Z","end":"2025-12-17T21:00:00.564855Z","steps":["trace[730122254] 'process raft request'  (duration: 129.96804ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-17T21:03:59.952398Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:39074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T21:04:00.034876Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:39112","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-17T21:04:00.090400Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892 14484227195305550727)"}
	{"level":"info","ts":"2025-12-17T21:04:00.100282Z","caller":"membership/cluster.go:460","msg":"removed member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","removed-remote-peer-id":"1206557d2b7140f9","removed-remote-peer-urls":["https://192.168.49.4:2380"],"removed-remote-peer-is-learner":false}
	{"level":"info","ts":"2025-12-17T21:04:00.100417Z","caller":"rafthttp/peer.go:316","msg":"stopping remote peer","remote-peer-id":"1206557d2b7140f9"}
	{"level":"warn","ts":"2025-12-17T21:04:00.100571Z","caller":"rafthttp/stream.go:285","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"1206557d2b7140f9"}
	{"level":"info","ts":"2025-12-17T21:04:00.100638Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"1206557d2b7140f9"}
	{"level":"warn","ts":"2025-12-17T21:04:00.100725Z","caller":"rafthttp/stream.go:285","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"1206557d2b7140f9"}
	{"level":"info","ts":"2025-12-17T21:04:00.107750Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"1206557d2b7140f9"}
	{"level":"info","ts":"2025-12-17T21:04:00.107949Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"aec36adc501070cc","remote-peer-id":"1206557d2b7140f9"}
	{"level":"warn","ts":"2025-12-17T21:04:00.108214Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"1206557d2b7140f9","error":"context canceled"}
	{"level":"warn","ts":"2025-12-17T21:04:00.108396Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"1206557d2b7140f9","error":"failed to read 1206557d2b7140f9 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2025-12-17T21:04:00.108467Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"1206557d2b7140f9"}
	{"level":"warn","ts":"2025-12-17T21:04:00.108656Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"1206557d2b7140f9","error":"context canceled"}
	{"level":"info","ts":"2025-12-17T21:04:00.108725Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"1206557d2b7140f9"}
	{"level":"info","ts":"2025-12-17T21:04:00.108764Z","caller":"rafthttp/peer.go:321","msg":"stopped remote peer","remote-peer-id":"1206557d2b7140f9"}
	{"level":"info","ts":"2025-12-17T21:04:00.108825Z","caller":"rafthttp/transport.go:354","msg":"removed remote peer","local-member-id":"aec36adc501070cc","removed-remote-peer-id":"1206557d2b7140f9"}
	{"level":"warn","ts":"2025-12-17T21:04:00.262263Z","caller":"embed/config_logging.go:188","msg":"rejected connection on peer endpoint","remote-addr":"192.168.49.4:53294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T21:04:00.262447Z","caller":"embed/config_logging.go:188","msg":"rejected connection on peer endpoint","remote-addr":"192.168.49.4:53306","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 21:04:10 up  3:46,  0 user,  load average: 2.39, 1.63, 1.26
	Linux ha-148567 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [be62aea7ae9e318fdcb21f246614d04dfc3cac7d3871e814ca132ac4ea1af8ab] <==
	I1217 21:03:39.376047       1 main.go:324] Node ha-148567-m03 has CIDR [10.244.2.0/24] 
	I1217 21:03:39.376110       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1217 21:03:39.376120       1 main.go:324] Node ha-148567-m04 has CIDR [10.244.4.0/24] 
	I1217 21:03:49.375345       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 21:03:49.375380       1 main.go:301] handling current node
	I1217 21:03:49.375396       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1217 21:03:49.375403       1 main.go:324] Node ha-148567-m02 has CIDR [10.244.1.0/24] 
	I1217 21:03:49.375629       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1217 21:03:49.375644       1 main.go:324] Node ha-148567-m03 has CIDR [10.244.2.0/24] 
	I1217 21:03:49.375716       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1217 21:03:49.375729       1 main.go:324] Node ha-148567-m04 has CIDR [10.244.4.0/24] 
	I1217 21:03:59.369987       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 21:03:59.370029       1 main.go:301] handling current node
	I1217 21:03:59.370046       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1217 21:03:59.370052       1 main.go:324] Node ha-148567-m02 has CIDR [10.244.1.0/24] 
	I1217 21:03:59.370194       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1217 21:03:59.370209       1 main.go:324] Node ha-148567-m03 has CIDR [10.244.2.0/24] 
	I1217 21:03:59.370263       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1217 21:03:59.370272       1 main.go:324] Node ha-148567-m04 has CIDR [10.244.4.0/24] 
	I1217 21:04:09.374313       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1217 21:04:09.374349       1 main.go:324] Node ha-148567-m04 has CIDR [10.244.4.0/24] 
	I1217 21:04:09.374454       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 21:04:09.374461       1 main.go:301] handling current node
	I1217 21:04:09.374473       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1217 21:04:09.374478       1 main.go:324] Node ha-148567-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [0273f065d6acfc2f5b1353496b1c10bb1409bb5cd6154db0859cb71f3d44d9a6] <==
	{"level":"warn","ts":"2025-12-17T20:58:24.564283Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40021445a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-12-17T20:58:24.564298Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400113da40/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-12-17T20:58:24.564332Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40027cb4a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-12-17T20:58:24.564368Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001a52960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-12-17T20:58:24.564388Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40025d0d20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-12-17T20:58:24.564401Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40020f1860/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-12-17T20:58:24.564436Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40015d2780/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-12-17T20:58:24.564457Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002438780/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-12-17T20:58:24.564472Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002438000/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-12-17T20:58:24.564505Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4000b41c20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-12-17T20:58:24.564523Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001611c20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-12-17T20:58:24.564546Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400184a1e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-12-17T20:58:24.564563Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40025d03c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-12-17T20:58:24.564622Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002b6b860/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-12-17T20:58:24.564666Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002e82f00/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-12-17T20:58:26.320355Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4000b405a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
	{"level":"warn","ts":"2025-12-17T20:58:28.792696Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001eb3860/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
	E1217 20:58:28.792871       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded" logger="UnhandledError"
	E1217 20:58:28.792953       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E1217 20:58:28.794104       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E1217 20:58:28.794148       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E1217 20:58:28.795316       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="2.436444ms" method="GET" path="/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-148567" result=null
	F1217 20:58:29.196106       1 hooks.go:204] PostStartHook "start-service-ip-repair-controllers" failed: unable to perform initial IP and Port allocation check
	{"level":"warn","ts":"2025-12-17T20:58:29.337612Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4000b405a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
	{"level":"warn","ts":"2025-12-17T20:58:29.338203Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40020f05a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
	
	
	==> kube-apiserver [58f3a197004f5d62632cc80af9bd747bbb630d2255db985a002dcb290b8fec26] <==
	I1217 20:59:07.961776       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1217 20:59:07.961822       1 aggregator.go:171] initial CRD sync complete...
	I1217 20:59:07.961836       1 autoregister_controller.go:144] Starting autoregister controller
	I1217 20:59:07.961841       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1217 20:59:07.961854       1 cache.go:39] Caches are synced for autoregister controller
	I1217 20:59:07.968198       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1217 20:59:07.968812       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1217 20:59:07.968843       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1217 20:59:07.968940       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1217 20:59:07.972528       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1217 20:59:07.972554       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1217 20:59:07.981440       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1217 20:59:08.000665       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	W1217 20:59:08.036152       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.3]
	I1217 20:59:08.037731       1 controller.go:667] quota admission added evaluator for: endpoints
	I1217 20:59:08.072711       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E1217 20:59:08.083854       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I1217 20:59:08.480717       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1217 20:59:09.161299       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1217 20:59:09.161407       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1217 20:59:10.866944       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	W1217 20:59:16.897166       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3]
	I1217 20:59:59.441759       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1217 20:59:59.499513       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1217 21:00:04.809392       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [494e8522562ca32d388131f40ce187010035d61cbc5d6ce5a865333dd850d94e] <==
	I1217 20:59:18.999948       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1217 20:59:19.004874       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1217 20:59:19.016373       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1217 20:59:19.016530       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1217 20:59:19.016388       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1217 20:59:19.016408       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1217 20:59:19.017528       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1217 20:59:19.017569       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1217 20:59:19.017632       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1217 20:59:19.017945       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1217 20:59:19.021092       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1217 20:59:19.025507       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1217 20:59:19.045724       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1217 20:59:19.045734       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1217 20:59:19.049978       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1217 20:59:19.050106       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1217 20:59:19.050216       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-148567-m04"
	I1217 20:59:19.053320       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1217 20:59:19.054927       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1217 20:59:19.059201       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1217 20:59:19.066898       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1217 20:59:19.066922       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1217 20:59:19.066930       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1217 21:04:02.709143       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-148567-m04"
	E1217 21:04:02.773222       1 garbagecollector.go:360] "Unhandled Error" err="error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:\"coordination.k8s.io/v1\", Kind:\"Lease\", Name:\"ha-148567-m03\", UID:\"d1898185-b597-4c23-bbf6-5570c72b9a1d\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:\"kube-node-lease\"}, dependentsLock:sync.RWMutex{w:sync.Mutex{_:sync.noCopy{}, mu:sync.Mutex{state:0, sema:0x0}}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:1}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{_:sync.noCopy{}, mu:sync.Mutex{state:0, sema:0x0}}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{_:sync.noCopy{}, mu:sync.Mutex{state:0, sema:0x0}}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{_:sync.noCopy{}, mu:sync.Mutex{state:0, sema:0x0}}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:\"v1\", Kind:\"Node\", Name:\"ha-148567-m03\", UID:\"8f703d96-c418-40ea-a314-201a4750b73d\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}}}: leases.coordination.k8s.io \"ha-148567-m03\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [bf8c2f6823453d081c96845039a6901183326d12bd63d0143e1c748f8411177a] <==
	I1217 20:58:23.691524       1 serving.go:386] Generated self-signed cert in-memory
	I1217 20:58:24.303529       1 controllermanager.go:191] "Starting" version="v1.34.3"
	I1217 20:58:24.303556       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 20:58:24.305033       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1217 20:58:24.305183       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1217 20:58:24.305513       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1217 20:58:24.305602       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1217 20:58:36.324375       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: forbidden: User \"system:kube-controller-manager\" cannot get path \"/healthz\""
	
	
	==> kube-proxy [9c2f443274791cbb739fa32684040efe768b281d3b40f0fdfa1ff15237e0485c] <==
	I1217 20:59:12.356083       1 server_linux.go:53] "Using iptables proxy"
	I1217 20:59:12.461390       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1217 20:59:12.562037       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1217 20:59:12.562172       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1217 20:59:12.562308       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 20:59:12.588852       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 20:59:12.588970       1 server_linux.go:132] "Using iptables Proxier"
	I1217 20:59:12.592749       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 20:59:12.593339       1 server.go:527] "Version info" version="v1.34.3"
	I1217 20:59:12.593422       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 20:59:12.597235       1 config.go:200] "Starting service config controller"
	I1217 20:59:12.597257       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 20:59:12.597270       1 config.go:106] "Starting endpoint slice config controller"
	I1217 20:59:12.597274       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 20:59:12.597299       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 20:59:12.597303       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 20:59:12.598036       1 config.go:309] "Starting node config controller"
	I1217 20:59:12.598091       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 20:59:12.598121       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 20:59:12.698129       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1217 20:59:12.698169       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1217 20:59:12.698129       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [055c04d40b9a0b3de2fc113e6e93106a29a67f711d7609c5bdc735d261688c9e] <==
	E1217 20:58:47.994962       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1217 20:58:48.630524       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1217 20:58:48.664886       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1217 20:58:48.798812       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1217 20:58:48.912518       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1217 20:58:49.980501       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1217 20:58:50.118478       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1217 20:58:50.661797       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1217 20:58:51.738458       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1217 20:58:51.833081       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1217 20:58:51.880446       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1217 20:58:52.278361       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1217 20:58:52.705253       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1217 20:58:54.937084       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1217 20:59:01.652210       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1217 20:59:02.395235       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1217 20:59:04.543403       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1217 20:59:04.592198       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1217 20:59:04.661603       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1217 20:59:06.231393       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1217 20:59:06.897693       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1217 21:03:56.561109       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-h2kwk\": pod busybox-7b57f96db7-h2kwk is already assigned to node \"ha-148567-m04\"" plugin="DefaultBinder" pod="default/busybox-7b57f96db7-h2kwk" node="ha-148567-m04"
	E1217 21:03:56.561191       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 2715f0e4-6474-4d30-b0e9-7360c6bff046(default/busybox-7b57f96db7-h2kwk) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="default/busybox-7b57f96db7-h2kwk"
	E1217 21:03:56.561217       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-h2kwk\": pod busybox-7b57f96db7-h2kwk is already assigned to node \"ha-148567-m04\"" logger="UnhandledError" pod="default/busybox-7b57f96db7-h2kwk"
	I1217 21:03:56.562999       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7b57f96db7-h2kwk" node="ha-148567-m04"
	
	
	==> kubelet <==
	Dec 17 20:59:08 ha-148567 kubelet[848]: E1217 20:59:08.852661     848 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
	Dec 17 20:59:08 ha-148567 kubelet[848]: E1217 20:59:08.852843     848 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/775371d6-bc7d-40f6-8e0f-655f265828ba-kube-proxy podName:775371d6-bc7d-40f6-8e0f-655f265828ba nodeName:}" failed. No retries permitted until 2025-12-17 20:59:09.352821731 +0000 UTC m=+110.333368718 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/775371d6-bc7d-40f6-8e0f-655f265828ba-kube-proxy") pod "kube-proxy-9n5cb" (UID: "775371d6-bc7d-40f6-8e0f-655f265828ba") : failed to sync configmap cache: timed out waiting for the condition
	Dec 17 20:59:08 ha-148567 kubelet[848]: E1217 20:59:08.855111     848 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
	Dec 17 20:59:08 ha-148567 kubelet[848]: E1217 20:59:08.855194     848 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4fbbda83-a7d2-41c0-98ea-066d493cd483-config-volume podName:4fbbda83-a7d2-41c0-98ea-066d493cd483 nodeName:}" failed. No retries permitted until 2025-12-17 20:59:09.355175868 +0000 UTC m=+110.335722855 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/4fbbda83-a7d2-41c0-98ea-066d493cd483-config-volume") pod "coredns-66bc5c9577-wgcmx" (UID: "4fbbda83-a7d2-41c0-98ea-066d493cd483") : failed to sync configmap cache: timed out waiting for the condition
	Dec 17 20:59:08 ha-148567 kubelet[848]: W1217 20:59:08.873868     848 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/88230c4afd3a17f60bc70f3b880924c24773c5848052be68927148210c5cca08/crio-889abb571076ec1220928e45789d843337a05ee99ef9673a28c2a3c540b7021c WatchSource:0}: Error finding container 889abb571076ec1220928e45789d843337a05ee99ef9673a28c2a3c540b7021c: Status 404 returned error can't find the container with id 889abb571076ec1220928e45789d843337a05ee99ef9673a28c2a3c540b7021c
	Dec 17 20:59:09 ha-148567 kubelet[848]: W1217 20:59:09.068519     848 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/88230c4afd3a17f60bc70f3b880924c24773c5848052be68927148210c5cca08/crio-48e89d3a90ce3d775524a935bca2163af9eb88b51ddefe58bdd80f6e131fc019 WatchSource:0}: Error finding container 48e89d3a90ce3d775524a935bca2163af9eb88b51ddefe58bdd80f6e131fc019: Status 404 returned error can't find the container with id 48e89d3a90ce3d775524a935bca2163af9eb88b51ddefe58bdd80f6e131fc019
	Dec 17 20:59:10 ha-148567 kubelet[848]: E1217 20:59:10.374815     848 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
	Dec 17 20:59:10 ha-148567 kubelet[848]: E1217 20:59:10.375407     848 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/775371d6-bc7d-40f6-8e0f-655f265828ba-kube-proxy podName:775371d6-bc7d-40f6-8e0f-655f265828ba nodeName:}" failed. No retries permitted until 2025-12-17 20:59:11.375385528 +0000 UTC m=+112.355932507 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/775371d6-bc7d-40f6-8e0f-655f265828ba-kube-proxy") pod "kube-proxy-9n5cb" (UID: "775371d6-bc7d-40f6-8e0f-655f265828ba") : failed to sync configmap cache: timed out waiting for the condition
	Dec 17 20:59:10 ha-148567 kubelet[848]: E1217 20:59:10.375315     848 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
	Dec 17 20:59:10 ha-148567 kubelet[848]: E1217 20:59:10.375700     848 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e3a7d90f-a4fd-4393-a20a-d49f8a8aa0db-config-volume podName:e3a7d90f-a4fd-4393-a20a-d49f8a8aa0db nodeName:}" failed. No retries permitted until 2025-12-17 20:59:11.375685929 +0000 UTC m=+112.356232908 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e3a7d90f-a4fd-4393-a20a-d49f8a8aa0db-config-volume") pod "coredns-66bc5c9577-l8xqv" (UID: "e3a7d90f-a4fd-4393-a20a-d49f8a8aa0db") : failed to sync configmap cache: timed out waiting for the condition
	Dec 17 20:59:10 ha-148567 kubelet[848]: E1217 20:59:10.375332     848 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
	Dec 17 20:59:10 ha-148567 kubelet[848]: E1217 20:59:10.375880     848 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4fbbda83-a7d2-41c0-98ea-066d493cd483-config-volume podName:4fbbda83-a7d2-41c0-98ea-066d493cd483 nodeName:}" failed. No retries permitted until 2025-12-17 20:59:11.375869357 +0000 UTC m=+112.356416344 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/4fbbda83-a7d2-41c0-98ea-066d493cd483-config-volume") pod "coredns-66bc5c9577-wgcmx" (UID: "4fbbda83-a7d2-41c0-98ea-066d493cd483") : failed to sync configmap cache: timed out waiting for the condition
	Dec 17 20:59:12 ha-148567 kubelet[848]: I1217 20:59:12.229115     848 kubelet.go:3203] "Trying to delete pod" pod="kube-system/kube-vip-ha-148567" podUID="21e88703-e2ca-4f7a-b29b-995460537681"
	Dec 17 20:59:12 ha-148567 kubelet[848]: I1217 20:59:12.264656     848 kubelet.go:3209] "Deleted mirror pod as it didn't match the static Pod" pod="kube-system/kube-vip-ha-148567"
	Dec 17 20:59:12 ha-148567 kubelet[848]: I1217 20:59:12.264831     848 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-vip-ha-148567"
	Dec 17 20:59:12 ha-148567 kubelet[848]: E1217 20:59:12.388846     848 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
	Dec 17 20:59:12 ha-148567 kubelet[848]: E1217 20:59:12.389130     848 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4fbbda83-a7d2-41c0-98ea-066d493cd483-config-volume podName:4fbbda83-a7d2-41c0-98ea-066d493cd483 nodeName:}" failed. No retries permitted until 2025-12-17 20:59:14.389106831 +0000 UTC m=+115.369653810 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/4fbbda83-a7d2-41c0-98ea-066d493cd483-config-volume") pod "coredns-66bc5c9577-wgcmx" (UID: "4fbbda83-a7d2-41c0-98ea-066d493cd483") : failed to sync configmap cache: timed out waiting for the condition
	Dec 17 20:59:12 ha-148567 kubelet[848]: E1217 20:59:12.389030     848 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
	Dec 17 20:59:12 ha-148567 kubelet[848]: E1217 20:59:12.389728     848 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e3a7d90f-a4fd-4393-a20a-d49f8a8aa0db-config-volume podName:e3a7d90f-a4fd-4393-a20a-d49f8a8aa0db nodeName:}" failed. No retries permitted until 2025-12-17 20:59:14.389710367 +0000 UTC m=+115.370257354 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e3a7d90f-a4fd-4393-a20a-d49f8a8aa0db-config-volume") pod "coredns-66bc5c9577-l8xqv" (UID: "e3a7d90f-a4fd-4393-a20a-d49f8a8aa0db") : failed to sync configmap cache: timed out waiting for the condition
	Dec 17 20:59:13 ha-148567 kubelet[848]: I1217 20:59:13.384524     848 kubelet.go:3203] "Trying to delete pod" pod="kube-system/kube-vip-ha-148567" podUID="21e88703-e2ca-4f7a-b29b-995460537681"
	Dec 17 20:59:14 ha-148567 kubelet[848]: W1217 20:59:14.589657     848 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/88230c4afd3a17f60bc70f3b880924c24773c5848052be68927148210c5cca08/crio-bb9c16be2a1aedd858d979ae15e9146ad064db8d985a6ceb59a72082bfd3a89a WatchSource:0}: Error finding container bb9c16be2a1aedd858d979ae15e9146ad064db8d985a6ceb59a72082bfd3a89a: Status 404 returned error can't find the container with id bb9c16be2a1aedd858d979ae15e9146ad064db8d985a6ceb59a72082bfd3a89a
	Dec 17 20:59:14 ha-148567 kubelet[848]: W1217 20:59:14.594356     848 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/88230c4afd3a17f60bc70f3b880924c24773c5848052be68927148210c5cca08/crio-929945e3d1a3e1046f86b0d613851b79f1be5ddf4950f580b22b0757f6bb7e06 WatchSource:0}: Error finding container 929945e3d1a3e1046f86b0d613851b79f1be5ddf4950f580b22b0757f6bb7e06: Status 404 returned error can't find the container with id 929945e3d1a3e1046f86b0d613851b79f1be5ddf4950f580b22b0757f6bb7e06
	Dec 17 20:59:15 ha-148567 kubelet[848]: I1217 20:59:15.598263     848 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-148567" podStartSLOduration=3.598244931 podStartE2EDuration="3.598244931s" podCreationTimestamp="2025-12-17 20:59:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 20:59:15.579683206 +0000 UTC m=+116.560230185" watchObservedRunningTime="2025-12-17 20:59:15.598244931 +0000 UTC m=+116.578791910"
	Dec 17 20:59:19 ha-148567 kubelet[848]: E1217 20:59:19.254429     848 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/53294b9279ebf263ed0cda5812f1ad589db804f07fe163bc838196d6b45a0fcc/diff" to get inode usage: stat /var/lib/containers/storage/overlay/53294b9279ebf263ed0cda5812f1ad589db804f07fe163bc838196d6b45a0fcc/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/kube-system_kube-controller-manager-ha-148567_275f9236d45449f9c15b78cd0e1552cb/kube-controller-manager/2.log" to get inode usage: stat /var/log/pods/kube-system_kube-controller-manager-ha-148567_275f9236d45449f9c15b78cd0e1552cb/kube-controller-manager/2.log: no such file or directory
	Dec 17 20:59:39 ha-148567 kubelet[848]: I1217 20:59:39.626838     848 scope.go:117] "RemoveContainer" containerID="36cc5a99d5800e41730be4a25115863b86a6455bd50f1d620bffa86d7a25ea3d"
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-148567 -n ha-148567
helpers_test.go:270: (dbg) Run:  kubectl --context ha-148567 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (3.53s)
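
The etcd log above shows what a clean secondary removal looks like: the cluster switches to a two-voter configuration (voters=(12593026477526642892 14484227195305550727)), logs "removed member" for peer 1206557d2b7140f9 (ha-148567-m03, 192.168.49.4), and tears down that peer's raft streams. A minimal sketch (not part of the test suite) of how the remaining membership could be verified from Go, assuming a reachable plaintext endpoint on 127.0.0.1:2379 — minikube's etcd normally requires client TLS, so a real check would add certificates:

	// Sketch only: list the members etcd reports after the secondary
	// control plane is removed, mirroring the "removed member" /
	// "switched to configuration voters=..." lines above. The endpoint
	// and the absence of TLS are assumptions.
	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		clientv3 "go.etcd.io/etcd/client/v3"
	)

	func main() {
		cli, err := clientv3.New(clientv3.Config{
			Endpoints:   []string{"127.0.0.1:2379"}, // assumed endpoint
			DialTimeout: 5 * time.Second,
		})
		if err != nil {
			log.Fatal(err)
		}
		defer cli.Close()

		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		resp, err := cli.MemberList(ctx)
		if err != nil {
			log.Fatal(err)
		}
		for _, m := range resp.Members {
			// After the removal above, peer 1206557d2b7140f9 should be gone.
			fmt.Printf("%x\t%s\t%v\n", m.ID, m.Name, m.PeerURLs)
		}
	}
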

                                                
                                    
x
+
TestJSONOutput/pause/Command (2.3s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-150253 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p json-output-150253 --output=json --user=testUser: exit status 80 (2.300684607s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"a1b853d9-c26d-410a-99f5-cf135b0a0e78","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-150253 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"50e81b6c-1ffc-4139-8801-dc64bddbfff5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-17T21:08:48Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"7c8e33fc-8676-43c4-87a4-753c83d3e8c5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 pause -p json-output-150253 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (2.30s)
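
Each line of the JSON output above is a CloudEvents-style envelope: specversion, id, source, a type such as io.k8s.sigs.minikube.step or io.k8s.sigs.minikube.error, and a string-valued data map. A minimal decoder sketch for such a stream — the struct below covers only the fields visible in this report, not minikube's internal event type:

	// Illustrative decoder for the newline-delimited CloudEvents that
	// `--output=json` emits (field names taken from the stdout above;
	// this is a sketch, not minikube's own event struct).
	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"log"
		"os"
	)

	type minikubeEvent struct {
		SpecVersion string            `json:"specversion"`
		ID          string            `json:"id"`
		Type        string            `json:"type"` // io.k8s.sigs.minikube.step, .error, ...
		Data        map[string]string `json:"data"`
	}

	func main() {
		sc := bufio.NewScanner(os.Stdin)         // pipe the JSON output in
		sc.Buffer(make([]byte, 0, 1<<20), 1<<20) // error events can exceed the default 64 KiB
		for sc.Scan() {
			var ev minikubeEvent
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				log.Printf("skipping non-JSON line: %v", err)
				continue
			}
			if ev.Type == "io.k8s.sigs.minikube.error" {
				fmt.Printf("%s: %s\n", ev.Data["name"], ev.Data["message"])
			}
		}
		if err := sc.Err(); err != nil {
			log.Fatal(err)
		}
	}

Piping `out/minikube-linux-arm64 pause -p json-output-150253 --output=json` into this would print the GUEST_PAUSE error event shown above.
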

                                                
                                    
x
+
TestJSONOutput/unpause/Command (1.57s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-150253 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 unpause -p json-output-150253 --output=json --user=testUser: exit status 80 (1.57122845s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"c1c86fcc-d61d-46a6-9f92-1b481a0cf221","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-150253 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"37399177-0587-4dd6-b295-6e4ba7614ce7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-17T21:08:49Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"50eaf095-8c1e-44d6-93cf-2d6fe8a20b1d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 unpause -p json-output-150253 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.57s)

                                                
                                    
x
+
TestKubernetesUpgrade (785.71s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-342357 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1217 21:25:30.853788  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-643319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-342357 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (34.648783351s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-342357
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-342357: (1.515958557s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-342357 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-342357 status --format={{.Host}}: exit status 7 (82.011631ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
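
version_upgrade_test.go treats exit status 7 from `minikube status` on a stopped profile as acceptable ("may be ok") before attempting the upgrade start. A sketch of that exit-code check using os/exec; the binary path and profile name below are the ones from this run, and the structure mirrors the test's behavior rather than its exact code:

	// Sketch of the "exit status 7 (may be ok)" pattern: run
	// `minikube status` on a stopped profile and branch on the exit
	// code rather than failing outright.
	package main

	import (
		"errors"
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-arm64",
			"-p", "kubernetes-upgrade-342357", "status", "--format={{.Host}}")
		out, err := cmd.Output()
		var ee *exec.ExitError
		switch {
		case err == nil:
			fmt.Printf("host: %s", out)
		case errors.As(err, &ee) && ee.ExitCode() == 7:
			// 7 is what minikube returned for the stopped host above;
			// the upgrade test proceeds in this case.
			fmt.Printf("host stopped (exit 7): %s", out)
		default:
			log.Fatal(err)
		}
	}
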
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-342357 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-342357 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 109 (12m23.736212157s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-342357] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21808
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21808-485134/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-485134/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "kubernetes-upgrade-342357" primary control-plane node in "kubernetes-upgrade-342357" cluster
	* Pulling base image v0.0.48-1765661130-22141 ...
	* Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 21:26:01.417518  666795 out.go:360] Setting OutFile to fd 1 ...
	I1217 21:26:01.417635  666795 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 21:26:01.417645  666795 out.go:374] Setting ErrFile to fd 2...
	I1217 21:26:01.417651  666795 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 21:26:01.417903  666795 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-485134/.minikube/bin
	I1217 21:26:01.418273  666795 out.go:368] Setting JSON to false
	I1217 21:26:01.419171  666795 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":14911,"bootTime":1765991851,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1217 21:26:01.419245  666795 start.go:143] virtualization:  
	I1217 21:26:01.422333  666795 out.go:179] * [kubernetes-upgrade-342357] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 21:26:01.426290  666795 out.go:179]   - MINIKUBE_LOCATION=21808
	I1217 21:26:01.426440  666795 notify.go:221] Checking for updates...
	I1217 21:26:01.432119  666795 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 21:26:01.435105  666795 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-485134/kubeconfig
	I1217 21:26:01.437952  666795 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-485134/.minikube
	I1217 21:26:01.440891  666795 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 21:26:01.443774  666795 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 21:26:01.447228  666795 config.go:182] Loaded profile config "kubernetes-upgrade-342357": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1217 21:26:01.448037  666795 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 21:26:01.481103  666795 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 21:26:01.481267  666795 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 21:26:01.552356  666795 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 21:26:01.543102707 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 21:26:01.552462  666795 docker.go:319] overlay module found
	I1217 21:26:01.557272  666795 out.go:179] * Using the docker driver based on existing profile
	I1217 21:26:01.560148  666795 start.go:309] selected driver: docker
	I1217 21:26:01.560176  666795 start.go:927] validating driver "docker" against &{Name:kubernetes-upgrade-342357 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-342357 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirm
warePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 21:26:01.560265  666795 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 21:26:01.560993  666795 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 21:26:01.621924  666795 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 21:26:01.612357358 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 21:26:01.622261  666795 cni.go:84] Creating CNI manager for ""
	I1217 21:26:01.622325  666795 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
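With the docker driver and the crio runtime detected together, minikube opts for kindnet as the CNI rather than a bridge config. A quick way to confirm which CNI configuration actually landed on the node, assuming the profile name from this run, is to list the CNI directory through minikube's SSH wrapper:

    out/minikube-linux-arm64 -p kubernetes-upgrade-342357 ssh -- ls /etc/cni/net.d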
	I1217 21:26:01.622368  666795 start.go:353] cluster config:
	{Name:kubernetes-upgrade-342357 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:kubernetes-upgrade-342357 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:c
luster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSoc
k: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 21:26:01.625635  666795 out.go:179] * Starting "kubernetes-upgrade-342357" primary control-plane node in "kubernetes-upgrade-342357" cluster
	I1217 21:26:01.628415  666795 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 21:26:01.631214  666795 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 21:26:01.634190  666795 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1217 21:26:01.634242  666795 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21808-485134/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4
	I1217 21:26:01.634253  666795 cache.go:65] Caching tarball of preloaded images
	I1217 21:26:01.634291  666795 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 21:26:01.634359  666795 preload.go:238] Found /home/jenkins/minikube-integration/21808-485134/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1217 21:26:01.634370  666795 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on crio
	I1217 21:26:01.634477  666795 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/kubernetes-upgrade-342357/config.json ...
	I1217 21:26:01.655217  666795 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 21:26:01.655242  666795 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 21:26:01.655263  666795 cache.go:243] Successfully downloaded all kic artifacts
	I1217 21:26:01.655313  666795 start.go:360] acquireMachinesLock for kubernetes-upgrade-342357: {Name:mkdb329d5953db2c13603c9a2465a33fd3e29d9f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 21:26:01.655385  666795 start.go:364] duration metric: took 47.738µs to acquireMachinesLock for "kubernetes-upgrade-342357"
	I1217 21:26:01.655414  666795 start.go:96] Skipping create...Using existing machine configuration
	I1217 21:26:01.655429  666795 fix.go:54] fixHost starting: 
	I1217 21:26:01.655733  666795 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-342357 --format={{.State.Status}}
	I1217 21:26:01.673228  666795 fix.go:112] recreateIfNeeded on kubernetes-upgrade-342357: state=Stopped err=<nil>
	W1217 21:26:01.673260  666795 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 21:26:01.676569  666795 out.go:252] * Restarting existing docker container for "kubernetes-upgrade-342357" ...
	I1217 21:26:01.676664  666795 cli_runner.go:164] Run: docker start kubernetes-upgrade-342357
	I1217 21:26:01.943741  666795 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-342357 --format={{.State.Status}}
	I1217 21:26:01.967094  666795 kic.go:430] container "kubernetes-upgrade-342357" state is running.
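The restart path here is plain Docker CLI: start the stopped container, then re-inspect its state until it reports running. A minimal by-hand equivalent, using the container name from this run:

    docker start kubernetes-upgrade-342357
    docker container inspect --format '{{.State.Status}}' kubernetes-upgrade-342357   # expect: running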
	I1217 21:26:01.967562  666795 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-342357
	I1217 21:26:01.992819  666795 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/kubernetes-upgrade-342357/config.json ...
	I1217 21:26:01.993143  666795 machine.go:94] provisionDockerMachine start ...
	I1217 21:26:01.993235  666795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-342357
	I1217 21:26:02.020152  666795 main.go:143] libmachine: Using SSH client type: native
	I1217 21:26:02.020747  666795 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33393 <nil> <nil>}
	I1217 21:26:02.020770  666795 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 21:26:02.021597  666795 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1217 21:26:05.155238  666795 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-342357
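provisionDockerMachine reaches the node over SSH on whichever host port Docker mapped to the container's 22/tcp (33393 in this run); the first dial at 21:26:02 fails with a handshake EOF and is simply retried until sshd inside the container comes up. The same lookup and connection can be reproduced by hand, with the key path and the docker user taken from the sshutil lines below:

    PORT=$(docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' kubernetes-upgrade-342357)
    ssh -p "$PORT" -i /home/jenkins/minikube-integration/21808-485134/.minikube/machines/kubernetes-upgrade-342357/id_rsa docker@127.0.0.1 hostname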
	
	I1217 21:26:05.155263  666795 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-342357"
	I1217 21:26:05.155327  666795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-342357
	I1217 21:26:05.173841  666795 main.go:143] libmachine: Using SSH client type: native
	I1217 21:26:05.174156  666795 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33393 <nil> <nil>}
	I1217 21:26:05.174183  666795 main.go:143] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-342357 && echo "kubernetes-upgrade-342357" | sudo tee /etc/hostname
	I1217 21:26:05.336499  666795 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-342357
	
	I1217 21:26:05.336578  666795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-342357
	I1217 21:26:05.360166  666795 main.go:143] libmachine: Using SSH client type: native
	I1217 21:26:05.360485  666795 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33393 <nil> <nil>}
	I1217 21:26:05.360503  666795 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-342357' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-342357/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-342357' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 21:26:05.533042  666795 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 21:26:05.533120  666795 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21808-485134/.minikube CaCertPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21808-485134/.minikube}
	I1217 21:26:05.533189  666795 ubuntu.go:190] setting up certificates
	I1217 21:26:05.533217  666795 provision.go:84] configureAuth start
	I1217 21:26:05.533317  666795 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-342357
	I1217 21:26:05.555041  666795 provision.go:143] copyHostCerts
	I1217 21:26:05.555128  666795 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem, removing ...
	I1217 21:26:05.555139  666795 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem
	I1217 21:26:05.555213  666795 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem (1082 bytes)
	I1217 21:26:05.555314  666795 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem, removing ...
	I1217 21:26:05.555320  666795 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem
	I1217 21:26:05.555347  666795 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem (1123 bytes)
	I1217 21:26:05.555396  666795 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem, removing ...
	I1217 21:26:05.555400  666795 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem
	I1217 21:26:05.555422  666795 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem (1675 bytes)
	I1217 21:26:05.555464  666795 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-342357 san=[127.0.0.1 192.168.85.2 kubernetes-upgrade-342357 localhost minikube]
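configureAuth signs the machine server certificate with the minikube CA in Go, embedding the SAN list printed above. A rough openssl equivalent of that step (file names assumed; minikube does not shell out for this, and the <(...) form needs bash):

    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
      -subj "/O=jenkins.kubernetes-upgrade-342357" -out server.csr
    openssl x509 -req -in server.csr -CA certs/ca.pem -CAkey certs/ca-key.pem \
      -CAcreateserial -days 365 -out server.pem \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.85.2,DNS:kubernetes-upgrade-342357,DNS:localhost,DNS:minikube')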
	I1217 21:26:05.679635  666795 provision.go:177] copyRemoteCerts
	I1217 21:26:05.679767  666795 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 21:26:05.679891  666795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-342357
	I1217 21:26:05.713200  666795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33393 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/kubernetes-upgrade-342357/id_rsa Username:docker}
	I1217 21:26:05.815213  666795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 21:26:05.842827  666795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1217 21:26:05.874734  666795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 21:26:05.902575  666795 provision.go:87] duration metric: took 369.326701ms to configureAuth
	I1217 21:26:05.902684  666795 ubuntu.go:206] setting minikube options for container-runtime
	I1217 21:26:05.902983  666795 config.go:182] Loaded profile config "kubernetes-upgrade-342357": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 21:26:05.903164  666795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-342357
	I1217 21:26:05.929998  666795 main.go:143] libmachine: Using SSH client type: native
	I1217 21:26:05.930364  666795 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33393 <nil> <nil>}
	I1217 21:26:05.930379  666795 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 21:26:06.322869  666795 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 21:26:06.322976  666795 machine.go:97] duration metric: took 4.329815088s to provisionDockerMachine
	I1217 21:26:06.322992  666795 start.go:293] postStartSetup for "kubernetes-upgrade-342357" (driver="docker")
	I1217 21:26:06.323005  666795 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 21:26:06.323078  666795 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 21:26:06.323128  666795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-342357
	I1217 21:26:06.350418  666795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33393 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/kubernetes-upgrade-342357/id_rsa Username:docker}
	I1217 21:26:06.459637  666795 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 21:26:06.463196  666795 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 21:26:06.463227  666795 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 21:26:06.463239  666795 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-485134/.minikube/addons for local assets ...
	I1217 21:26:06.463291  666795 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-485134/.minikube/files for local assets ...
	I1217 21:26:06.463384  666795 filesync.go:149] local asset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem -> 4884122.pem in /etc/ssl/certs
	I1217 21:26:06.463488  666795 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 21:26:06.471958  666795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem --> /etc/ssl/certs/4884122.pem (1708 bytes)
	I1217 21:26:06.496137  666795 start.go:296] duration metric: took 173.132601ms for postStartSetup
	I1217 21:26:06.496232  666795 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 21:26:06.496274  666795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-342357
	I1217 21:26:06.515472  666795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33393 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/kubernetes-upgrade-342357/id_rsa Username:docker}
	I1217 21:26:06.621078  666795 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 21:26:06.628262  666795 fix.go:56] duration metric: took 4.972832285s for fixHost
	I1217 21:26:06.628292  666795 start.go:83] releasing machines lock for "kubernetes-upgrade-342357", held for 4.972891543s
	I1217 21:26:06.628366  666795 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-342357
	I1217 21:26:06.648756  666795 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem (1338 bytes)
	W1217 21:26:06.648815  666795 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412_empty.pem, impossibly tiny 0 bytes
	I1217 21:26:06.648833  666795 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 21:26:06.648871  666795 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem (1082 bytes)
	I1217 21:26:06.648901  666795 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem (1123 bytes)
	I1217 21:26:06.648937  666795 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem (1675 bytes)
	I1217 21:26:06.648989  666795 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem (1708 bytes)
	I1217 21:26:06.649060  666795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem --> /usr/share/ca-certificates/488412.pem (1338 bytes)
	I1217 21:26:06.649129  666795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-342357
	I1217 21:26:06.670567  666795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33393 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/kubernetes-upgrade-342357/id_rsa Username:docker}
	I1217 21:26:06.783857  666795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem --> /usr/share/ca-certificates/4884122.pem (1708 bytes)
	I1217 21:26:06.805261  666795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 21:26:06.824303  666795 ssh_runner.go:195] Run: openssl version
	I1217 21:26:06.830846  666795 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4884122.pem
	I1217 21:26:06.845389  666795 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4884122.pem /etc/ssl/certs/4884122.pem
	I1217 21:26:06.853648  666795 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4884122.pem
	I1217 21:26:06.857629  666795 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 20:21 /usr/share/ca-certificates/4884122.pem
	I1217 21:26:06.857741  666795 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4884122.pem
	I1217 21:26:06.898760  666795 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 21:26:06.907207  666795 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 21:26:06.930361  666795 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 21:26:06.938636  666795 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 21:26:06.942564  666795 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I1217 21:26:06.942638  666795 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 21:26:07.006042  666795 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 21:26:07.015471  666795 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/488412.pem
	I1217 21:26:07.025383  666795 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/488412.pem /etc/ssl/certs/488412.pem
	I1217 21:26:07.035289  666795 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/488412.pem
	I1217 21:26:07.043187  666795 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 20:21 /usr/share/ca-certificates/488412.pem
	I1217 21:26:07.043263  666795 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/488412.pem
	I1217 21:26:07.106323  666795 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 21:26:07.117222  666795 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-certificates >/dev/null 2>&1 && sudo update-ca-certificates || true"
	I1217 21:26:07.120934  666795 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-trust >/dev/null 2>&1 && sudo update-ca-trust extract || true"
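The test -L / ln -fs pairs above recreate what update-ca-certificates would: each CA under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its OpenSSL subject hash, which is what the default certificate lookup resolves. Reproducing one link by hand (minikubeCA.pem hashes to b5213941 in this run, matching the b5213941.0 check above):

    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
    sudo test -L "/etc/ssl/certs/${HASH}.0" && echo linked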
	I1217 21:26:07.124765  666795 ssh_runner.go:195] Run: cat /version.json
	I1217 21:26:07.124848  666795 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 21:26:07.235110  666795 ssh_runner.go:195] Run: systemctl --version
	I1217 21:26:07.244031  666795 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 21:26:07.285917  666795 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 21:26:07.290678  666795 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 21:26:07.290751  666795 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 21:26:07.299337  666795 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 21:26:07.299359  666795 start.go:496] detecting cgroup driver to use...
	I1217 21:26:07.299390  666795 detect.go:187] detected "cgroupfs" cgroup driver on host os
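detect.go reports the host's cgroup driver so that CRI-O and the kubelet below can be configured to match (both end up on "cgroupfs"). One way to see which cgroup hierarchy a host runs, independent of minikube (note this shows the cgroup version, not the driver itself):

    stat -fc %T /sys/fs/cgroup    # cgroup2fs = unified v2, tmpfs = legacy v1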
	I1217 21:26:07.299489  666795 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 21:26:07.324281  666795 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 21:26:07.345890  666795 docker.go:218] disabling cri-docker service (if available) ...
	I1217 21:26:07.345951  666795 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 21:26:07.364282  666795 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 21:26:07.379223  666795 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 21:26:07.546248  666795 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 21:26:07.722207  666795 docker.go:234] disabling docker service ...
	I1217 21:26:07.722280  666795 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 21:26:07.738574  666795 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 21:26:07.752526  666795 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 21:26:07.980839  666795 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 21:26:08.268849  666795 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 21:26:08.291013  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 21:26:08.327852  666795 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 21:26:08.327913  666795 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 21:26:08.343816  666795 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1217 21:26:08.343886  666795 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 21:26:08.354185  666795 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 21:26:08.369295  666795 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 21:26:08.381866  666795 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 21:26:08.396126  666795 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 21:26:08.417131  666795 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 21:26:08.433907  666795 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
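The run of sed commands above edits /etc/crio/crio.conf.d/02-crio.conf in place. Assuming the stock placement of these keys in CRI-O's TOML, the touched settings come out roughly as:

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]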
	I1217 21:26:08.453127  666795 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 21:26:08.465693  666795 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 21:26:08.474125  666795 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 21:26:08.648675  666795 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 21:26:08.843394  666795 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 21:26:08.843460  666795 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 21:26:08.847424  666795 start.go:564] Will wait 60s for crictl version
	I1217 21:26:08.847494  666795 ssh_runner.go:195] Run: which crictl
	I1217 21:26:08.851078  666795 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 21:26:08.875265  666795 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 21:26:08.875347  666795 ssh_runner.go:195] Run: crio --version
	I1217 21:26:08.906209  666795 ssh_runner.go:195] Run: crio --version
	I1217 21:26:08.942521  666795 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	I1217 21:26:08.945402  666795 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-342357 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 21:26:08.962675  666795 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1217 21:26:08.966714  666795 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 21:26:08.976842  666795 kubeadm.go:884] updating cluster {Name:kubernetes-upgrade-342357 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:kubernetes-upgrade-342357 Namespace:default APIServerHAVIP: APIServ
erName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePat
h: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 21:26:08.976971  666795 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1217 21:26:08.977034  666795 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 21:26:09.008900  666795 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-rc.1". assuming images are not preloaded.
	I1217 21:26:09.008986  666795 ssh_runner.go:195] Run: which lz4
	I1217 21:26:09.012998  666795 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1217 21:26:09.016855  666795 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1217 21:26:09.016891  666795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4 --> /preloaded.tar.lz4 (306154261 bytes)
	I1217 21:26:10.830002  666795 crio.go:462] duration metric: took 1.817044816s to copy over tarball
	I1217 21:26:10.830120  666795 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1217 21:26:12.869366  666795 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.039196137s)
	I1217 21:26:12.869394  666795 crio.go:469] duration metric: took 2.039321249s to extract the tarball
	I1217 21:26:12.869402  666795 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1217 21:26:12.940076  666795 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 21:26:12.983868  666795 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 21:26:12.983901  666795 cache_images.go:86] Images are preloaded, skipping loading
	I1217 21:26:12.983909  666795 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-rc.1 crio true true} ...
	I1217 21:26:12.984028  666795 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kubernetes-upgrade-342357 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:kubernetes-upgrade-342357 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
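This override is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 380-byte scp at 21:26:13 below confirms the path); the empty ExecStart= line clears the packaged default before the minikube flags are applied. After the daemon-reload, the merged unit can be inspected with:

    systemctl cat kubelet
    systemctl status kubelet --no-pager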
	I1217 21:26:12.984117  666795 ssh_runner.go:195] Run: crio config
	I1217 21:26:13.074811  666795 cni.go:84] Creating CNI manager for ""
	I1217 21:26:13.074850  666795 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 21:26:13.074861  666795 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 21:26:13.074890  666795 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-342357 NodeName:kubernetes-upgrade-342357 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.c
rt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 21:26:13.075047  666795 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-342357"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
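The generated manifest is staged as /var/tmp/minikube/kubeadm.yaml.new before being diffed against the live copy. Assuming a kubeadm recent enough to ship the subcommand (the binaries found below live under /var/lib/minikube/binaries), the same file can be sanity-checked standalone:

    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new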
	
	I1217 21:26:13.075126  666795 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1217 21:26:13.084349  666795 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 21:26:13.084471  666795 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 21:26:13.092370  666795 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1217 21:26:13.115193  666795 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1217 21:26:13.137962  666795 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
	I1217 21:26:13.154951  666795 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1217 21:26:13.159532  666795 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 21:26:13.171019  666795 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 21:26:13.304351  666795 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 21:26:13.325617  666795 certs.go:69] Setting up /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/kubernetes-upgrade-342357 for IP: 192.168.85.2
	I1217 21:26:13.325636  666795 certs.go:195] generating shared ca certs ...
	I1217 21:26:13.325652  666795 certs.go:227] acquiring lock for ca certs: {Name:mk2b1bd9fa0b029b02dd38a7b1b08717d4426eca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 21:26:13.325819  666795 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.key
	I1217 21:26:13.325864  666795 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.key
	I1217 21:26:13.325871  666795 certs.go:257] generating profile certs ...
	I1217 21:26:13.325955  666795 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/kubernetes-upgrade-342357/client.key
	I1217 21:26:13.326018  666795 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/kubernetes-upgrade-342357/apiserver.key.b3d49d50
	I1217 21:26:13.326058  666795 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/kubernetes-upgrade-342357/proxy-client.key
	I1217 21:26:13.326172  666795 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem (1338 bytes)
	W1217 21:26:13.326204  666795 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412_empty.pem, impossibly tiny 0 bytes
	I1217 21:26:13.326212  666795 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 21:26:13.326240  666795 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem (1082 bytes)
	I1217 21:26:13.326273  666795 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem (1123 bytes)
	I1217 21:26:13.326306  666795 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem (1675 bytes)
	I1217 21:26:13.326359  666795 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem (1708 bytes)
	I1217 21:26:13.326982  666795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 21:26:13.358058  666795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 21:26:13.378647  666795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 21:26:13.403265  666795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1217 21:26:13.442750  666795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/kubernetes-upgrade-342357/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1217 21:26:13.472446  666795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/kubernetes-upgrade-342357/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 21:26:13.491095  666795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/kubernetes-upgrade-342357/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 21:26:13.511001  666795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/kubernetes-upgrade-342357/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1217 21:26:13.530846  666795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 21:26:13.554844  666795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem --> /usr/share/ca-certificates/488412.pem (1338 bytes)
	I1217 21:26:13.579090  666795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem --> /usr/share/ca-certificates/4884122.pem (1708 bytes)
	I1217 21:26:13.599676  666795 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 21:26:13.614981  666795 ssh_runner.go:195] Run: openssl version
	I1217 21:26:13.622697  666795 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 21:26:13.632054  666795 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 21:26:13.640814  666795 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 21:26:13.644800  666795 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I1217 21:26:13.644915  666795 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 21:26:13.686245  666795 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 21:26:13.693866  666795 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/488412.pem
	I1217 21:26:13.701983  666795 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/488412.pem /etc/ssl/certs/488412.pem
	I1217 21:26:13.710841  666795 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/488412.pem
	I1217 21:26:13.714913  666795 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 20:21 /usr/share/ca-certificates/488412.pem
	I1217 21:26:13.714989  666795 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/488412.pem
	I1217 21:26:13.757903  666795 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 21:26:13.766823  666795 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4884122.pem
	I1217 21:26:13.780831  666795 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4884122.pem /etc/ssl/certs/4884122.pem
	I1217 21:26:13.789723  666795 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4884122.pem
	I1217 21:26:13.794395  666795 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 20:21 /usr/share/ca-certificates/4884122.pem
	I1217 21:26:13.794515  666795 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4884122.pem
	I1217 21:26:13.839130  666795 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 21:26:13.852068  666795 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 21:26:13.856737  666795 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 21:26:13.902494  666795 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 21:26:13.956499  666795 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 21:26:14.009224  666795 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 21:26:14.053146  666795 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 21:26:14.108091  666795 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
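Each of these openssl runs asks one question: will the certificate still be valid 86400 seconds (24 hours) from now? -checkend exits 0 if so and 1 if the cert expires inside the window, which lets minikube decide on regeneration without parsing dates:

    openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
      && echo "valid for at least 24h" || echo "expires within 24h"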
	I1217 21:26:14.159797  666795 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-342357 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:kubernetes-upgrade-342357 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 21:26:14.159895  666795 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 21:26:14.159979  666795 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 21:26:14.190079  666795 cri.go:89] found id: ""
	I1217 21:26:14.190158  666795 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 21:26:14.199121  666795 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 21:26:14.199148  666795 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 21:26:14.199203  666795 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 21:26:14.207563  666795 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 21:26:14.208093  666795 kubeconfig.go:47] verify endpoint returned: get endpoint: "kubernetes-upgrade-342357" does not appear in /home/jenkins/minikube-integration/21808-485134/kubeconfig
	I1217 21:26:14.208243  666795 kubeconfig.go:62] /home/jenkins/minikube-integration/21808-485134/kubeconfig needs updating (will repair): [kubeconfig missing "kubernetes-upgrade-342357" cluster setting kubeconfig missing "kubernetes-upgrade-342357" context setting]
	I1217 21:26:14.208531  666795 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-485134/kubeconfig: {Name:mkc7214551fa855d6d4575d627c3ecd54291b351 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
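The repair adds the missing cluster, context, and user stanzas back into the shared kubeconfig. A hand-rolled equivalent with kubectl, assuming entry names that mirror the profile:

    kubectl config set-cluster kubernetes-upgrade-342357 \
      --server=https://192.168.85.2:8443 \
      --certificate-authority=/home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt
    kubectl config set-credentials kubernetes-upgrade-342357 \
      --client-certificate=/home/jenkins/minikube-integration/21808-485134/.minikube/profiles/kubernetes-upgrade-342357/client.crt \
      --client-key=/home/jenkins/minikube-integration/21808-485134/.minikube/profiles/kubernetes-upgrade-342357/client.key
    kubectl config set-context kubernetes-upgrade-342357 \
      --cluster=kubernetes-upgrade-342357 --user=kubernetes-upgrade-342357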
	I1217 21:26:14.209066  666795 kapi.go:59] client config for kubernetes-upgrade-342357: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21808-485134/.minikube/profiles/kubernetes-upgrade-342357/client.crt", KeyFile:"/home/jenkins/minikube-integration/21808-485134/.minikube/profiles/kubernetes-upgrade-342357/client.key", CAFile:"/home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(ni
l), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb51f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 21:26:14.209565  666795 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1217 21:26:14.209586  666795 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1217 21:26:14.209593  666795 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1217 21:26:14.209597  666795 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1217 21:26:14.209602  666795 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1217 21:26:14.209905  666795 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 21:26:14.220754  666795 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-17 21:25:39.829177368 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-17 21:26:13.149347954 +0000
	@@ -1,4 +1,4 @@
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: InitConfiguration
	 localAPIEndpoint:
	   advertiseAddress: 192.168.85.2
	@@ -14,31 +14,34 @@
	   criSocket: unix:///var/run/crio/crio.sock
	   name: "kubernetes-upgrade-342357"
	   kubeletExtraArgs:
	-    node-ip: 192.168.85.2
	+    - name: "node-ip"
	+      value: "192.168.85.2"
	   taints: []
	 ---
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: ClusterConfiguration
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    - name: "enable-admission-plugins"
	+      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	   extraArgs:
	-    allocate-node-cidrs: "true"
	-    leader-elect: "false"
	+    - name: "allocate-node-cidrs"
	+      value: "true"
	+    - name: "leader-elect"
	+      value: "false"
	 scheduler:
	   extraArgs:
	-    leader-elect: "false"
	+    - name: "leader-elect"
	+      value: "false"
	 certificatesDir: /var/lib/minikube/certs
	 clusterName: mk
	 controlPlaneEndpoint: control-plane.minikube.internal:8443
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	-    extraArgs:
	-      proxy-refresh-interval: "70000"
	-kubernetesVersion: v1.28.0
	+kubernetesVersion: v1.35.0-rc.1
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
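The diff above is the kubeadm API migration from v1beta3 to v1beta4: flat extraArgs maps become lists of name/value pairs, the etcd proxy-refresh-interval extra argument is dropped, and kubernetesVersion is bumped to the upgrade target. minikube regenerates the file itself, but the same conversion could be reproduced with kubeadm's own helper, sketched here against the paths shown in the log (the .migrated output name is made up for illustration):

    # Rewrite a v1beta3 kubeadm config into the newest supported API version.
    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm config migrate \
      --old-config /var/tmp/minikube/kubeadm.yaml \
      --new-config /var/tmp/minikube/kubeadm.yaml.migrated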
	I1217 21:26:14.220785  666795 kubeadm.go:1161] stopping kube-system containers ...
	I1217 21:26:14.220798  666795 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1217 21:26:14.220856  666795 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 21:26:14.268713  666795 cri.go:89] found id: ""
	I1217 21:26:14.268785  666795 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1217 21:26:14.287076  666795 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 21:26:14.303258  666795 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5643 Dec 17 21:25 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Dec 17 21:25 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2039 Dec 17 21:25 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Dec 17 21:25 /etc/kubernetes/scheduler.conf
	
	I1217 21:26:14.303333  666795 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 21:26:14.322071  666795 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 21:26:14.339310  666795 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 21:26:14.349667  666795 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1217 21:26:14.349736  666795 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 21:26:14.357934  666795 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 21:26:14.366400  666795 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1217 21:26:14.366474  666795 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
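The two "may not be in ... will remove" warnings show the restart path validating each static kubeconfig under /etc/kubernetes against the shared control-plane endpoint and deleting the ones that do not reference it, so kubeadm can regenerate them below. A minimal sketch of that loop, with the endpoint and file names taken from the log:

    endpoint="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # A kubeconfig that does not point at the shared endpoint is stale:
      # remove it and let "kubeadm init phase kubeconfig" recreate it.
      sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done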
	I1217 21:26:14.374325  666795 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 21:26:14.382308  666795 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 21:26:14.441985  666795 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 21:26:16.290716  666795 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.848699313s)
	I1217 21:26:16.290782  666795 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1217 21:26:16.579337  666795 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 21:26:16.672965  666795 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
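Note that instead of a full "kubeadm init", the restart runs individual init phases in sequence (certs, kubeconfig, kubelet-start, control-plane, etcd) so that existing cluster state is reused where possible. Condensed from the five commands above, with the binary invoked by absolute path rather than through the PATH override the log uses:

    b=/var/lib/minikube/binaries/v1.35.0-rc.1
    cfg=/var/tmp/minikube/kubeadm.yaml
    sudo "$b/kubeadm" init phase certs all --config "$cfg"           # (re)issue certificates
    sudo "$b/kubeadm" init phase kubeconfig all --config "$cfg"      # regenerate the files removed above
    sudo "$b/kubeadm" init phase kubelet-start --config "$cfg"       # write kubelet config and restart it
    sudo "$b/kubeadm" init phase control-plane all --config "$cfg"   # static pod manifests for the control plane
    sudo "$b/kubeadm" init phase etcd local --config "$cfg"          # local etcd static pod manifest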
	I1217 21:26:16.747765  666795 api_server.go:52] waiting for apiserver process to appear ...
	I1217 21:26:16.747855  666795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 21:26:17.249578  666795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... the identical pgrep probe repeats about every 500ms with no match (117 attempts elided) ...]
	I1217 21:27:16.248742  666795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
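The minute of pgrep probes above is the apiserver liveness wait: minikube polls for a kube-apiserver process twice a second and only falls through to diagnostics once the window expires. A standalone sketch of the same wait, assuming a 60-second budget (the actual timeout is not stated in this excerpt):

    # Poll up to ~60s for a kube-apiserver process, twice per second.
    deadline=$((SECONDS + 60))
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      if (( SECONDS >= deadline )); then
        echo "apiserver never appeared; collecting diagnostics" >&2
        break
      fi
      sleep 0.5
    done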
	I1217 21:27:16.748784  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 21:27:16.748882  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 21:27:16.816608  666795 cri.go:89] found id: ""
	I1217 21:27:16.816632  666795 logs.go:282] 0 containers: []
	W1217 21:27:16.816641  666795 logs.go:284] No container was found matching "kube-apiserver"
	I1217 21:27:16.816647  666795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 21:27:16.816705  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 21:27:16.883110  666795 cri.go:89] found id: ""
	I1217 21:27:16.883132  666795 logs.go:282] 0 containers: []
	W1217 21:27:16.883140  666795 logs.go:284] No container was found matching "etcd"
	I1217 21:27:16.883147  666795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 21:27:16.883206  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 21:27:16.917834  666795 cri.go:89] found id: ""
	I1217 21:27:16.917858  666795 logs.go:282] 0 containers: []
	W1217 21:27:16.917866  666795 logs.go:284] No container was found matching "coredns"
	I1217 21:27:16.917872  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 21:27:16.917928  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 21:27:16.949809  666795 cri.go:89] found id: ""
	I1217 21:27:16.949832  666795 logs.go:282] 0 containers: []
	W1217 21:27:16.949841  666795 logs.go:284] No container was found matching "kube-scheduler"
	I1217 21:27:16.949847  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 21:27:16.949906  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 21:27:16.985652  666795 cri.go:89] found id: ""
	I1217 21:27:16.985673  666795 logs.go:282] 0 containers: []
	W1217 21:27:16.985682  666795 logs.go:284] No container was found matching "kube-proxy"
	I1217 21:27:16.985687  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 21:27:16.985746  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 21:27:17.016977  666795 cri.go:89] found id: ""
	I1217 21:27:17.017000  666795 logs.go:282] 0 containers: []
	W1217 21:27:17.017008  666795 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 21:27:17.017015  666795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 21:27:17.017072  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 21:27:17.049705  666795 cri.go:89] found id: ""
	I1217 21:27:17.049727  666795 logs.go:282] 0 containers: []
	W1217 21:27:17.049736  666795 logs.go:284] No container was found matching "kindnet"
	I1217 21:27:17.049742  666795 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 21:27:17.049806  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 21:27:17.106428  666795 cri.go:89] found id: ""
	I1217 21:27:17.106536  666795 logs.go:282] 0 containers: []
	W1217 21:27:17.106567  666795 logs.go:284] No container was found matching "storage-provisioner"
	I1217 21:27:17.106624  666795 logs.go:123] Gathering logs for container status ...
	I1217 21:27:17.106652  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 21:27:17.190032  666795 logs.go:123] Gathering logs for kubelet ...
	I1217 21:27:17.190056  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 21:27:17.274322  666795 logs.go:123] Gathering logs for dmesg ...
	I1217 21:27:17.274403  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 21:27:17.294716  666795 logs.go:123] Gathering logs for describe nodes ...
	I1217 21:27:17.294741  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 21:27:17.704321  666795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 21:27:17.704344  666795 logs.go:123] Gathering logs for CRI-O ...
	I1217 21:27:17.704358  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
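Each failed wait triggers the same diagnostic sweep seen above: a per-component crictl lookup (all of which return empty IDs here), the kubelet and CRI-O journals, dmesg, a "describe nodes" attempt that fails because nothing is listening on 8443, and a full container listing; the ordering of the gathers varies between cycles, and the cycle then repeats every few seconds for the rest of this log. Roughly, as one pass:

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet storage-provisioner; do
      sudo crictl ps -a --quiet --name="$name"     # every lookup is empty here
    done
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig    # refused: apiserver is down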
	I1217 21:27:20.234861  666795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 21:27:20.245075  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 21:27:20.245142  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 21:27:20.270241  666795 cri.go:89] found id: ""
	I1217 21:27:20.270263  666795 logs.go:282] 0 containers: []
	W1217 21:27:20.270271  666795 logs.go:284] No container was found matching "kube-apiserver"
	I1217 21:27:20.270278  666795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 21:27:20.270335  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 21:27:20.301383  666795 cri.go:89] found id: ""
	I1217 21:27:20.301406  666795 logs.go:282] 0 containers: []
	W1217 21:27:20.301414  666795 logs.go:284] No container was found matching "etcd"
	I1217 21:27:20.301420  666795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 21:27:20.301494  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 21:27:20.330280  666795 cri.go:89] found id: ""
	I1217 21:27:20.330306  666795 logs.go:282] 0 containers: []
	W1217 21:27:20.330316  666795 logs.go:284] No container was found matching "coredns"
	I1217 21:27:20.330322  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 21:27:20.330380  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 21:27:20.356483  666795 cri.go:89] found id: ""
	I1217 21:27:20.356509  666795 logs.go:282] 0 containers: []
	W1217 21:27:20.356520  666795 logs.go:284] No container was found matching "kube-scheduler"
	I1217 21:27:20.356526  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 21:27:20.356592  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 21:27:20.382595  666795 cri.go:89] found id: ""
	I1217 21:27:20.382621  666795 logs.go:282] 0 containers: []
	W1217 21:27:20.382631  666795 logs.go:284] No container was found matching "kube-proxy"
	I1217 21:27:20.382637  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 21:27:20.382699  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 21:27:20.408846  666795 cri.go:89] found id: ""
	I1217 21:27:20.408870  666795 logs.go:282] 0 containers: []
	W1217 21:27:20.408879  666795 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 21:27:20.408885  666795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 21:27:20.408944  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 21:27:20.436282  666795 cri.go:89] found id: ""
	I1217 21:27:20.436305  666795 logs.go:282] 0 containers: []
	W1217 21:27:20.436315  666795 logs.go:284] No container was found matching "kindnet"
	I1217 21:27:20.436321  666795 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 21:27:20.436383  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 21:27:20.461513  666795 cri.go:89] found id: ""
	I1217 21:27:20.461582  666795 logs.go:282] 0 containers: []
	W1217 21:27:20.461597  666795 logs.go:284] No container was found matching "storage-provisioner"
	I1217 21:27:20.461608  666795 logs.go:123] Gathering logs for container status ...
	I1217 21:27:20.461620  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 21:27:20.492146  666795 logs.go:123] Gathering logs for kubelet ...
	I1217 21:27:20.492172  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 21:27:20.560085  666795 logs.go:123] Gathering logs for dmesg ...
	I1217 21:27:20.560125  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 21:27:20.576391  666795 logs.go:123] Gathering logs for describe nodes ...
	I1217 21:27:20.576422  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 21:27:20.646695  666795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 21:27:20.646719  666795 logs.go:123] Gathering logs for CRI-O ...
	I1217 21:27:20.646734  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 21:27:23.180541  666795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 21:27:23.190485  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 21:27:23.190557  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 21:27:23.219665  666795 cri.go:89] found id: ""
	I1217 21:27:23.219690  666795 logs.go:282] 0 containers: []
	W1217 21:27:23.219700  666795 logs.go:284] No container was found matching "kube-apiserver"
	I1217 21:27:23.219706  666795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 21:27:23.219776  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 21:27:23.246468  666795 cri.go:89] found id: ""
	I1217 21:27:23.246544  666795 logs.go:282] 0 containers: []
	W1217 21:27:23.246560  666795 logs.go:284] No container was found matching "etcd"
	I1217 21:27:23.246568  666795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 21:27:23.246626  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 21:27:23.274113  666795 cri.go:89] found id: ""
	I1217 21:27:23.274136  666795 logs.go:282] 0 containers: []
	W1217 21:27:23.274146  666795 logs.go:284] No container was found matching "coredns"
	I1217 21:27:23.274153  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 21:27:23.274220  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 21:27:23.300380  666795 cri.go:89] found id: ""
	I1217 21:27:23.300407  666795 logs.go:282] 0 containers: []
	W1217 21:27:23.300416  666795 logs.go:284] No container was found matching "kube-scheduler"
	I1217 21:27:23.300423  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 21:27:23.300532  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 21:27:23.325342  666795 cri.go:89] found id: ""
	I1217 21:27:23.325365  666795 logs.go:282] 0 containers: []
	W1217 21:27:23.325374  666795 logs.go:284] No container was found matching "kube-proxy"
	I1217 21:27:23.325380  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 21:27:23.325458  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 21:27:23.350327  666795 cri.go:89] found id: ""
	I1217 21:27:23.350348  666795 logs.go:282] 0 containers: []
	W1217 21:27:23.350357  666795 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 21:27:23.350363  666795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 21:27:23.350423  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 21:27:23.375864  666795 cri.go:89] found id: ""
	I1217 21:27:23.375885  666795 logs.go:282] 0 containers: []
	W1217 21:27:23.375894  666795 logs.go:284] No container was found matching "kindnet"
	I1217 21:27:23.375900  666795 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 21:27:23.375958  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 21:27:23.405981  666795 cri.go:89] found id: ""
	I1217 21:27:23.406004  666795 logs.go:282] 0 containers: []
	W1217 21:27:23.406012  666795 logs.go:284] No container was found matching "storage-provisioner"
	I1217 21:27:23.406020  666795 logs.go:123] Gathering logs for kubelet ...
	I1217 21:27:23.406031  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 21:27:23.473968  666795 logs.go:123] Gathering logs for dmesg ...
	I1217 21:27:23.474006  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 21:27:23.492037  666795 logs.go:123] Gathering logs for describe nodes ...
	I1217 21:27:23.492064  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 21:27:23.568953  666795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 21:27:23.568973  666795 logs.go:123] Gathering logs for CRI-O ...
	I1217 21:27:23.568986  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 21:27:23.599958  666795 logs.go:123] Gathering logs for container status ...
	I1217 21:27:23.599995  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 21:27:26.131727  666795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 21:27:26.141858  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 21:27:26.141937  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 21:27:26.168690  666795 cri.go:89] found id: ""
	I1217 21:27:26.168716  666795 logs.go:282] 0 containers: []
	W1217 21:27:26.168726  666795 logs.go:284] No container was found matching "kube-apiserver"
	I1217 21:27:26.168732  666795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 21:27:26.168792  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 21:27:26.195253  666795 cri.go:89] found id: ""
	I1217 21:27:26.195284  666795 logs.go:282] 0 containers: []
	W1217 21:27:26.195293  666795 logs.go:284] No container was found matching "etcd"
	I1217 21:27:26.195300  666795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 21:27:26.195362  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 21:27:26.220701  666795 cri.go:89] found id: ""
	I1217 21:27:26.220725  666795 logs.go:282] 0 containers: []
	W1217 21:27:26.220733  666795 logs.go:284] No container was found matching "coredns"
	I1217 21:27:26.220740  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 21:27:26.220805  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 21:27:26.245302  666795 cri.go:89] found id: ""
	I1217 21:27:26.245325  666795 logs.go:282] 0 containers: []
	W1217 21:27:26.245334  666795 logs.go:284] No container was found matching "kube-scheduler"
	I1217 21:27:26.245340  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 21:27:26.245399  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 21:27:26.270562  666795 cri.go:89] found id: ""
	I1217 21:27:26.270588  666795 logs.go:282] 0 containers: []
	W1217 21:27:26.270597  666795 logs.go:284] No container was found matching "kube-proxy"
	I1217 21:27:26.270603  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 21:27:26.270662  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 21:27:26.296823  666795 cri.go:89] found id: ""
	I1217 21:27:26.296850  666795 logs.go:282] 0 containers: []
	W1217 21:27:26.296859  666795 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 21:27:26.296866  666795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 21:27:26.296921  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 21:27:26.330978  666795 cri.go:89] found id: ""
	I1217 21:27:26.331001  666795 logs.go:282] 0 containers: []
	W1217 21:27:26.331009  666795 logs.go:284] No container was found matching "kindnet"
	I1217 21:27:26.331028  666795 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 21:27:26.331091  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 21:27:26.356233  666795 cri.go:89] found id: ""
	I1217 21:27:26.356256  666795 logs.go:282] 0 containers: []
	W1217 21:27:26.356265  666795 logs.go:284] No container was found matching "storage-provisioner"
	I1217 21:27:26.356274  666795 logs.go:123] Gathering logs for kubelet ...
	I1217 21:27:26.356285  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 21:27:26.429619  666795 logs.go:123] Gathering logs for dmesg ...
	I1217 21:27:26.429661  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 21:27:26.449871  666795 logs.go:123] Gathering logs for describe nodes ...
	I1217 21:27:26.449905  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 21:27:26.515990  666795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 21:27:26.516009  666795 logs.go:123] Gathering logs for CRI-O ...
	I1217 21:27:26.516023  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 21:27:26.547226  666795 logs.go:123] Gathering logs for container status ...
	I1217 21:27:26.547263  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 21:27:29.075710  666795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 21:27:29.086906  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 21:27:29.086976  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 21:27:29.122062  666795 cri.go:89] found id: ""
	I1217 21:27:29.122083  666795 logs.go:282] 0 containers: []
	W1217 21:27:29.122092  666795 logs.go:284] No container was found matching "kube-apiserver"
	I1217 21:27:29.122098  666795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 21:27:29.122154  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 21:27:29.151225  666795 cri.go:89] found id: ""
	I1217 21:27:29.151249  666795 logs.go:282] 0 containers: []
	W1217 21:27:29.151258  666795 logs.go:284] No container was found matching "etcd"
	I1217 21:27:29.151264  666795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 21:27:29.151322  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 21:27:29.178027  666795 cri.go:89] found id: ""
	I1217 21:27:29.178051  666795 logs.go:282] 0 containers: []
	W1217 21:27:29.178060  666795 logs.go:284] No container was found matching "coredns"
	I1217 21:27:29.178067  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 21:27:29.178123  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 21:27:29.206100  666795 cri.go:89] found id: ""
	I1217 21:27:29.206129  666795 logs.go:282] 0 containers: []
	W1217 21:27:29.206138  666795 logs.go:284] No container was found matching "kube-scheduler"
	I1217 21:27:29.206144  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 21:27:29.206217  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 21:27:29.230605  666795 cri.go:89] found id: ""
	I1217 21:27:29.230631  666795 logs.go:282] 0 containers: []
	W1217 21:27:29.230640  666795 logs.go:284] No container was found matching "kube-proxy"
	I1217 21:27:29.230647  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 21:27:29.230705  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 21:27:29.256533  666795 cri.go:89] found id: ""
	I1217 21:27:29.256569  666795 logs.go:282] 0 containers: []
	W1217 21:27:29.256579  666795 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 21:27:29.256585  666795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 21:27:29.256642  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 21:27:29.281663  666795 cri.go:89] found id: ""
	I1217 21:27:29.281700  666795 logs.go:282] 0 containers: []
	W1217 21:27:29.281710  666795 logs.go:284] No container was found matching "kindnet"
	I1217 21:27:29.281732  666795 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 21:27:29.281813  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 21:27:29.307469  666795 cri.go:89] found id: ""
	I1217 21:27:29.307539  666795 logs.go:282] 0 containers: []
	W1217 21:27:29.307567  666795 logs.go:284] No container was found matching "storage-provisioner"
	I1217 21:27:29.307636  666795 logs.go:123] Gathering logs for describe nodes ...
	I1217 21:27:29.307670  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 21:27:29.374892  666795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 21:27:29.374916  666795 logs.go:123] Gathering logs for CRI-O ...
	I1217 21:27:29.374930  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 21:27:29.405671  666795 logs.go:123] Gathering logs for container status ...
	I1217 21:27:29.405705  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 21:27:29.436469  666795 logs.go:123] Gathering logs for kubelet ...
	I1217 21:27:29.436502  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 21:27:29.506991  666795 logs.go:123] Gathering logs for dmesg ...
	I1217 21:27:29.507027  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 21:27:32.024794  666795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 21:27:32.035101  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 21:27:32.035170  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 21:27:32.060955  666795 cri.go:89] found id: ""
	I1217 21:27:32.060981  666795 logs.go:282] 0 containers: []
	W1217 21:27:32.060990  666795 logs.go:284] No container was found matching "kube-apiserver"
	I1217 21:27:32.060997  666795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 21:27:32.061055  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 21:27:32.103729  666795 cri.go:89] found id: ""
	I1217 21:27:32.103755  666795 logs.go:282] 0 containers: []
	W1217 21:27:32.103777  666795 logs.go:284] No container was found matching "etcd"
	I1217 21:27:32.103784  666795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 21:27:32.103854  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 21:27:32.135463  666795 cri.go:89] found id: ""
	I1217 21:27:32.135486  666795 logs.go:282] 0 containers: []
	W1217 21:27:32.135510  666795 logs.go:284] No container was found matching "coredns"
	I1217 21:27:32.135517  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 21:27:32.135574  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 21:27:32.161595  666795 cri.go:89] found id: ""
	I1217 21:27:32.161618  666795 logs.go:282] 0 containers: []
	W1217 21:27:32.161628  666795 logs.go:284] No container was found matching "kube-scheduler"
	I1217 21:27:32.161634  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 21:27:32.161691  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 21:27:32.186637  666795 cri.go:89] found id: ""
	I1217 21:27:32.186659  666795 logs.go:282] 0 containers: []
	W1217 21:27:32.186667  666795 logs.go:284] No container was found matching "kube-proxy"
	I1217 21:27:32.186673  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 21:27:32.186730  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 21:27:32.212583  666795 cri.go:89] found id: ""
	I1217 21:27:32.212609  666795 logs.go:282] 0 containers: []
	W1217 21:27:32.212619  666795 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 21:27:32.212626  666795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 21:27:32.212683  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 21:27:32.242458  666795 cri.go:89] found id: ""
	I1217 21:27:32.242485  666795 logs.go:282] 0 containers: []
	W1217 21:27:32.242495  666795 logs.go:284] No container was found matching "kindnet"
	I1217 21:27:32.242503  666795 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 21:27:32.242614  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 21:27:32.270914  666795 cri.go:89] found id: ""
	I1217 21:27:32.270936  666795 logs.go:282] 0 containers: []
	W1217 21:27:32.270944  666795 logs.go:284] No container was found matching "storage-provisioner"
	I1217 21:27:32.270954  666795 logs.go:123] Gathering logs for CRI-O ...
	I1217 21:27:32.270965  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 21:27:32.302676  666795 logs.go:123] Gathering logs for container status ...
	I1217 21:27:32.302708  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 21:27:32.333709  666795 logs.go:123] Gathering logs for kubelet ...
	I1217 21:27:32.333736  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 21:27:32.401928  666795 logs.go:123] Gathering logs for dmesg ...
	I1217 21:27:32.401964  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 21:27:32.418183  666795 logs.go:123] Gathering logs for describe nodes ...
	I1217 21:27:32.418219  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 21:27:32.483550  666795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 21:27:34.983790  666795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 21:27:34.993967  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 21:27:34.994064  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 21:27:35.023657  666795 cri.go:89] found id: ""
	I1217 21:27:35.023686  666795 logs.go:282] 0 containers: []
	W1217 21:27:35.023703  666795 logs.go:284] No container was found matching "kube-apiserver"
	I1217 21:27:35.023710  666795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 21:27:35.023771  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 21:27:35.051230  666795 cri.go:89] found id: ""
	I1217 21:27:35.051255  666795 logs.go:282] 0 containers: []
	W1217 21:27:35.051265  666795 logs.go:284] No container was found matching "etcd"
	I1217 21:27:35.051272  666795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 21:27:35.051331  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 21:27:35.082250  666795 cri.go:89] found id: ""
	I1217 21:27:35.082277  666795 logs.go:282] 0 containers: []
	W1217 21:27:35.082287  666795 logs.go:284] No container was found matching "coredns"
	I1217 21:27:35.082294  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 21:27:35.082362  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 21:27:35.114037  666795 cri.go:89] found id: ""
	I1217 21:27:35.114066  666795 logs.go:282] 0 containers: []
	W1217 21:27:35.114076  666795 logs.go:284] No container was found matching "kube-scheduler"
	I1217 21:27:35.114083  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 21:27:35.114152  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 21:27:35.151111  666795 cri.go:89] found id: ""
	I1217 21:27:35.151135  666795 logs.go:282] 0 containers: []
	W1217 21:27:35.151146  666795 logs.go:284] No container was found matching "kube-proxy"
	I1217 21:27:35.151152  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 21:27:35.151221  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 21:27:35.184901  666795 cri.go:89] found id: ""
	I1217 21:27:35.184977  666795 logs.go:282] 0 containers: []
	W1217 21:27:35.185000  666795 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 21:27:35.185027  666795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 21:27:35.185147  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 21:27:35.211523  666795 cri.go:89] found id: ""
	I1217 21:27:35.211547  666795 logs.go:282] 0 containers: []
	W1217 21:27:35.211557  666795 logs.go:284] No container was found matching "kindnet"
	I1217 21:27:35.211563  666795 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 21:27:35.211681  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 21:27:35.238238  666795 cri.go:89] found id: ""
	I1217 21:27:35.238266  666795 logs.go:282] 0 containers: []
	W1217 21:27:35.238275  666795 logs.go:284] No container was found matching "storage-provisioner"
	I1217 21:27:35.238291  666795 logs.go:123] Gathering logs for kubelet ...
	I1217 21:27:35.238303  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 21:27:35.315718  666795 logs.go:123] Gathering logs for dmesg ...
	I1217 21:27:35.315772  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 21:27:35.332924  666795 logs.go:123] Gathering logs for describe nodes ...
	I1217 21:27:35.333011  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 21:27:35.400139  666795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 21:27:35.400160  666795 logs.go:123] Gathering logs for CRI-O ...
	I1217 21:27:35.400175  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 21:27:35.431647  666795 logs.go:123] Gathering logs for container status ...
	I1217 21:27:35.431734  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 21:27:37.960332  666795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 21:27:37.974258  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 21:27:37.974344  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 21:27:38.013762  666795 cri.go:89] found id: ""
	I1217 21:27:38.013783  666795 logs.go:282] 0 containers: []
	W1217 21:27:38.013793  666795 logs.go:284] No container was found matching "kube-apiserver"
	I1217 21:27:38.013798  666795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 21:27:38.013859  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 21:27:38.045474  666795 cri.go:89] found id: ""
	I1217 21:27:38.045497  666795 logs.go:282] 0 containers: []
	W1217 21:27:38.045506  666795 logs.go:284] No container was found matching "etcd"
	I1217 21:27:38.045512  666795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 21:27:38.045579  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 21:27:38.079725  666795 cri.go:89] found id: ""
	I1217 21:27:38.079800  666795 logs.go:282] 0 containers: []
	W1217 21:27:38.079812  666795 logs.go:284] No container was found matching "coredns"
	I1217 21:27:38.079819  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 21:27:38.079915  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 21:27:38.145895  666795 cri.go:89] found id: ""
	I1217 21:27:38.145967  666795 logs.go:282] 0 containers: []
	W1217 21:27:38.145990  666795 logs.go:284] No container was found matching "kube-scheduler"
	I1217 21:27:38.146008  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 21:27:38.146094  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 21:27:38.185517  666795 cri.go:89] found id: ""
	I1217 21:27:38.185544  666795 logs.go:282] 0 containers: []
	W1217 21:27:38.185555  666795 logs.go:284] No container was found matching "kube-proxy"
	I1217 21:27:38.185561  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 21:27:38.185620  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 21:27:38.221564  666795 cri.go:89] found id: ""
	I1217 21:27:38.221589  666795 logs.go:282] 0 containers: []
	W1217 21:27:38.221598  666795 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 21:27:38.221604  666795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 21:27:38.221662  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 21:27:38.249685  666795 cri.go:89] found id: ""
	I1217 21:27:38.249710  666795 logs.go:282] 0 containers: []
	W1217 21:27:38.249719  666795 logs.go:284] No container was found matching "kindnet"
	I1217 21:27:38.249725  666795 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 21:27:38.249781  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 21:27:38.284631  666795 cri.go:89] found id: ""
	I1217 21:27:38.284657  666795 logs.go:282] 0 containers: []
	W1217 21:27:38.284666  666795 logs.go:284] No container was found matching "storage-provisioner"
	I1217 21:27:38.284675  666795 logs.go:123] Gathering logs for kubelet ...
	I1217 21:27:38.284686  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 21:27:38.365779  666795 logs.go:123] Gathering logs for dmesg ...
	I1217 21:27:38.365816  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 21:27:38.387272  666795 logs.go:123] Gathering logs for describe nodes ...
	I1217 21:27:38.387303  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 21:27:38.510771  666795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 21:27:38.510794  666795 logs.go:123] Gathering logs for CRI-O ...
	I1217 21:27:38.510809  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 21:27:38.562468  666795 logs.go:123] Gathering logs for container status ...
	I1217 21:27:38.562506  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
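	
	Each probe round above queries one control-plane component at a time with sudo crictl ps -a --quiet --name=<component>; empty output is what cri.go records as found id: "" and logs.go reports as No container was found matching "...". An illustrative Go approximation of that loop (the wrapper is hypothetical and assumes crictl and sudo are on PATH; only the crictl command line is taken verbatim from the log):
	
	    // Illustrative approximation, not minikube's cri.go: run the same
	    // crictl query the log shows and treat empty output as "no container".
	    package main
	
	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	    )
	
	    func main() {
	        for _, name := range []string{
	            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
	            "kube-proxy", "kube-controller-manager", "kindnet",
	            "storage-provisioner",
	        } {
	            out, err := exec.Command("sudo", "crictl", "ps", "-a",
	                "--quiet", "--name="+name).Output()
	            ids := strings.Fields(string(out))
	            if err != nil || len(ids) == 0 {
	                fmt.Printf("no container found matching %q\n", name)
	                continue
	            }
	            fmt.Printf("%s: %v\n", name, ids)
	        }
	    }
	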
	I1217 21:27:41.099710  666795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 21:27:41.110667  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 21:27:41.110732  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 21:27:41.140830  666795 cri.go:89] found id: ""
	I1217 21:27:41.140854  666795 logs.go:282] 0 containers: []
	W1217 21:27:41.140864  666795 logs.go:284] No container was found matching "kube-apiserver"
	I1217 21:27:41.140870  666795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 21:27:41.140935  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 21:27:41.166383  666795 cri.go:89] found id: ""
	I1217 21:27:41.166404  666795 logs.go:282] 0 containers: []
	W1217 21:27:41.166413  666795 logs.go:284] No container was found matching "etcd"
	I1217 21:27:41.166419  666795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 21:27:41.166478  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 21:27:41.195183  666795 cri.go:89] found id: ""
	I1217 21:27:41.195206  666795 logs.go:282] 0 containers: []
	W1217 21:27:41.195214  666795 logs.go:284] No container was found matching "coredns"
	I1217 21:27:41.195220  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 21:27:41.195277  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 21:27:41.222930  666795 cri.go:89] found id: ""
	I1217 21:27:41.222954  666795 logs.go:282] 0 containers: []
	W1217 21:27:41.222964  666795 logs.go:284] No container was found matching "kube-scheduler"
	I1217 21:27:41.222970  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 21:27:41.223027  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 21:27:41.248403  666795 cri.go:89] found id: ""
	I1217 21:27:41.248425  666795 logs.go:282] 0 containers: []
	W1217 21:27:41.248434  666795 logs.go:284] No container was found matching "kube-proxy"
	I1217 21:27:41.248441  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 21:27:41.248511  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 21:27:41.279678  666795 cri.go:89] found id: ""
	I1217 21:27:41.279708  666795 logs.go:282] 0 containers: []
	W1217 21:27:41.279718  666795 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 21:27:41.279724  666795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 21:27:41.279785  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 21:27:41.306089  666795 cri.go:89] found id: ""
	I1217 21:27:41.306111  666795 logs.go:282] 0 containers: []
	W1217 21:27:41.306120  666795 logs.go:284] No container was found matching "kindnet"
	I1217 21:27:41.306126  666795 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 21:27:41.306186  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 21:27:41.331004  666795 cri.go:89] found id: ""
	I1217 21:27:41.331027  666795 logs.go:282] 0 containers: []
	W1217 21:27:41.331036  666795 logs.go:284] No container was found matching "storage-provisioner"
	I1217 21:27:41.331045  666795 logs.go:123] Gathering logs for CRI-O ...
	I1217 21:27:41.331056  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 21:27:41.362084  666795 logs.go:123] Gathering logs for container status ...
	I1217 21:27:41.362118  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 21:27:41.391756  666795 logs.go:123] Gathering logs for kubelet ...
	I1217 21:27:41.391787  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 21:27:41.465390  666795 logs.go:123] Gathering logs for dmesg ...
	I1217 21:27:41.465430  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 21:27:41.482920  666795 logs.go:123] Gathering logs for describe nodes ...
	I1217 21:27:41.482949  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 21:27:41.559432  666795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
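	
	The pgrep probes above repeat on a roughly three-second cadence (21:27:35, :38, :41, :44, ...). A sketch of such a fixed-interval poll with an overall deadline; the 3s interval matches the spacing visible in the log, while the 4-minute timeout is an assumed placeholder, not minikube's actual setting:
	
	    // Sketch only: fixed-interval poll with a deadline, mirroring the
	    // cadence of the "sudo pgrep -xnf kube-apiserver.*minikube.*" probes.
	    package main
	
	    import (
	        "fmt"
	        "os/exec"
	        "time"
	    )
	
	    func main() {
	        deadline := time.Now().Add(4 * time.Minute) // assumed timeout
	        for time.Now().Before(deadline) {
	            // -x exact match, -n newest process, -f match full cmdline
	            if err := exec.Command("sudo", "pgrep", "-xnf",
	                "kube-apiserver.*minikube.*").Run(); err == nil {
	                fmt.Println("kube-apiserver process found")
	                return
	            }
	            time.Sleep(3 * time.Second)
	        }
	        fmt.Println("timed out waiting for kube-apiserver")
	    }
	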
	I1217 21:27:44.059705  666795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 21:27:44.071646  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 21:27:44.071727  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 21:27:44.104010  666795 cri.go:89] found id: ""
	I1217 21:27:44.104039  666795 logs.go:282] 0 containers: []
	W1217 21:27:44.104048  666795 logs.go:284] No container was found matching "kube-apiserver"
	I1217 21:27:44.104055  666795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 21:27:44.104117  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 21:27:44.132986  666795 cri.go:89] found id: ""
	I1217 21:27:44.133014  666795 logs.go:282] 0 containers: []
	W1217 21:27:44.133023  666795 logs.go:284] No container was found matching "etcd"
	I1217 21:27:44.133029  666795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 21:27:44.133087  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 21:27:44.168675  666795 cri.go:89] found id: ""
	I1217 21:27:44.168704  666795 logs.go:282] 0 containers: []
	W1217 21:27:44.168713  666795 logs.go:284] No container was found matching "coredns"
	I1217 21:27:44.168720  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 21:27:44.168784  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 21:27:44.195549  666795 cri.go:89] found id: ""
	I1217 21:27:44.195607  666795 logs.go:282] 0 containers: []
	W1217 21:27:44.195618  666795 logs.go:284] No container was found matching "kube-scheduler"
	I1217 21:27:44.195624  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 21:27:44.195694  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 21:27:44.222319  666795 cri.go:89] found id: ""
	I1217 21:27:44.222390  666795 logs.go:282] 0 containers: []
	W1217 21:27:44.222413  666795 logs.go:284] No container was found matching "kube-proxy"
	I1217 21:27:44.222431  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 21:27:44.222522  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 21:27:44.249548  666795 cri.go:89] found id: ""
	I1217 21:27:44.249626  666795 logs.go:282] 0 containers: []
	W1217 21:27:44.249652  666795 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 21:27:44.249671  666795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 21:27:44.249769  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 21:27:44.280378  666795 cri.go:89] found id: ""
	I1217 21:27:44.280447  666795 logs.go:282] 0 containers: []
	W1217 21:27:44.280470  666795 logs.go:284] No container was found matching "kindnet"
	I1217 21:27:44.280485  666795 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 21:27:44.280561  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 21:27:44.305775  666795 cri.go:89] found id: ""
	I1217 21:27:44.305804  666795 logs.go:282] 0 containers: []
	W1217 21:27:44.305814  666795 logs.go:284] No container was found matching "storage-provisioner"
	I1217 21:27:44.305824  666795 logs.go:123] Gathering logs for dmesg ...
	I1217 21:27:44.305837  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 21:27:44.322226  666795 logs.go:123] Gathering logs for describe nodes ...
	I1217 21:27:44.322255  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 21:27:44.391612  666795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 21:27:44.391645  666795 logs.go:123] Gathering logs for CRI-O ...
	I1217 21:27:44.391660  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 21:27:44.423059  666795 logs.go:123] Gathering logs for container status ...
	I1217 21:27:44.423095  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 21:27:44.456516  666795 logs.go:123] Gathering logs for kubelet ...
	I1217 21:27:44.456585  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 21:27:47.025755  666795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 21:27:47.036128  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 21:27:47.036200  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 21:27:47.065861  666795 cri.go:89] found id: ""
	I1217 21:27:47.065892  666795 logs.go:282] 0 containers: []
	W1217 21:27:47.065902  666795 logs.go:284] No container was found matching "kube-apiserver"
	I1217 21:27:47.065908  666795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 21:27:47.065967  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 21:27:47.098973  666795 cri.go:89] found id: ""
	I1217 21:27:47.098995  666795 logs.go:282] 0 containers: []
	W1217 21:27:47.099004  666795 logs.go:284] No container was found matching "etcd"
	I1217 21:27:47.099010  666795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 21:27:47.099068  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 21:27:47.130993  666795 cri.go:89] found id: ""
	I1217 21:27:47.131014  666795 logs.go:282] 0 containers: []
	W1217 21:27:47.131022  666795 logs.go:284] No container was found matching "coredns"
	I1217 21:27:47.131028  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 21:27:47.131086  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 21:27:47.157970  666795 cri.go:89] found id: ""
	I1217 21:27:47.157993  666795 logs.go:282] 0 containers: []
	W1217 21:27:47.158001  666795 logs.go:284] No container was found matching "kube-scheduler"
	I1217 21:27:47.158008  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 21:27:47.158067  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 21:27:47.185082  666795 cri.go:89] found id: ""
	I1217 21:27:47.185168  666795 logs.go:282] 0 containers: []
	W1217 21:27:47.185195  666795 logs.go:284] No container was found matching "kube-proxy"
	I1217 21:27:47.185207  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 21:27:47.185293  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 21:27:47.210213  666795 cri.go:89] found id: ""
	I1217 21:27:47.210239  666795 logs.go:282] 0 containers: []
	W1217 21:27:47.210247  666795 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 21:27:47.210254  666795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 21:27:47.210317  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 21:27:47.236729  666795 cri.go:89] found id: ""
	I1217 21:27:47.236752  666795 logs.go:282] 0 containers: []
	W1217 21:27:47.236761  666795 logs.go:284] No container was found matching "kindnet"
	I1217 21:27:47.236767  666795 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 21:27:47.236832  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 21:27:47.262735  666795 cri.go:89] found id: ""
	I1217 21:27:47.262816  666795 logs.go:282] 0 containers: []
	W1217 21:27:47.262840  666795 logs.go:284] No container was found matching "storage-provisioner"
	I1217 21:27:47.262862  666795 logs.go:123] Gathering logs for kubelet ...
	I1217 21:27:47.262905  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 21:27:47.334925  666795 logs.go:123] Gathering logs for dmesg ...
	I1217 21:27:47.334961  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 21:27:47.357962  666795 logs.go:123] Gathering logs for describe nodes ...
	I1217 21:27:47.358003  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 21:27:47.442281  666795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 21:27:47.442305  666795 logs.go:123] Gathering logs for CRI-O ...
	I1217 21:27:47.442318  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 21:27:47.473355  666795 logs.go:123] Gathering logs for container status ...
	I1217 21:27:47.473388  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 21:27:50.007816  666795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 21:27:50.020568  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 21:27:50.020650  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 21:27:50.050816  666795 cri.go:89] found id: ""
	I1217 21:27:50.050843  666795 logs.go:282] 0 containers: []
	W1217 21:27:50.050853  666795 logs.go:284] No container was found matching "kube-apiserver"
	I1217 21:27:50.050860  666795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 21:27:50.050922  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 21:27:50.089931  666795 cri.go:89] found id: ""
	I1217 21:27:50.090009  666795 logs.go:282] 0 containers: []
	W1217 21:27:50.090034  666795 logs.go:284] No container was found matching "etcd"
	I1217 21:27:50.090066  666795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 21:27:50.090150  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 21:27:50.131718  666795 cri.go:89] found id: ""
	I1217 21:27:50.131747  666795 logs.go:282] 0 containers: []
	W1217 21:27:50.131756  666795 logs.go:284] No container was found matching "coredns"
	I1217 21:27:50.131762  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 21:27:50.131820  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 21:27:50.163245  666795 cri.go:89] found id: ""
	I1217 21:27:50.163274  666795 logs.go:282] 0 containers: []
	W1217 21:27:50.163282  666795 logs.go:284] No container was found matching "kube-scheduler"
	I1217 21:27:50.163289  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 21:27:50.163348  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 21:27:50.190008  666795 cri.go:89] found id: ""
	I1217 21:27:50.190035  666795 logs.go:282] 0 containers: []
	W1217 21:27:50.190043  666795 logs.go:284] No container was found matching "kube-proxy"
	I1217 21:27:50.190050  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 21:27:50.190131  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 21:27:50.217574  666795 cri.go:89] found id: ""
	I1217 21:27:50.217643  666795 logs.go:282] 0 containers: []
	W1217 21:27:50.217673  666795 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 21:27:50.217688  666795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 21:27:50.217764  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 21:27:50.247474  666795 cri.go:89] found id: ""
	I1217 21:27:50.247510  666795 logs.go:282] 0 containers: []
	W1217 21:27:50.247528  666795 logs.go:284] No container was found matching "kindnet"
	I1217 21:27:50.247534  666795 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 21:27:50.247619  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 21:27:50.273986  666795 cri.go:89] found id: ""
	I1217 21:27:50.274019  666795 logs.go:282] 0 containers: []
	W1217 21:27:50.274028  666795 logs.go:284] No container was found matching "storage-provisioner"
	I1217 21:27:50.274037  666795 logs.go:123] Gathering logs for kubelet ...
	I1217 21:27:50.274049  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 21:27:50.345750  666795 logs.go:123] Gathering logs for dmesg ...
	I1217 21:27:50.345787  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 21:27:50.363265  666795 logs.go:123] Gathering logs for describe nodes ...
	I1217 21:27:50.363303  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 21:27:50.428698  666795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 21:27:50.428721  666795 logs.go:123] Gathering logs for CRI-O ...
	I1217 21:27:50.428735  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 21:27:50.459711  666795 logs.go:123] Gathering logs for container status ...
	I1217 21:27:50.459746  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 21:27:52.988497  666795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 21:27:52.998607  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 21:27:52.998690  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 21:27:53.028058  666795 cri.go:89] found id: ""
	I1217 21:27:53.028090  666795 logs.go:282] 0 containers: []
	W1217 21:27:53.028100  666795 logs.go:284] No container was found matching "kube-apiserver"
	I1217 21:27:53.028107  666795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 21:27:53.028184  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 21:27:53.058511  666795 cri.go:89] found id: ""
	I1217 21:27:53.058535  666795 logs.go:282] 0 containers: []
	W1217 21:27:53.058543  666795 logs.go:284] No container was found matching "etcd"
	I1217 21:27:53.058549  666795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 21:27:53.058608  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 21:27:53.097113  666795 cri.go:89] found id: ""
	I1217 21:27:53.097152  666795 logs.go:282] 0 containers: []
	W1217 21:27:53.097162  666795 logs.go:284] No container was found matching "coredns"
	I1217 21:27:53.097168  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 21:27:53.097233  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 21:27:53.126158  666795 cri.go:89] found id: ""
	I1217 21:27:53.126179  666795 logs.go:282] 0 containers: []
	W1217 21:27:53.126187  666795 logs.go:284] No container was found matching "kube-scheduler"
	I1217 21:27:53.126193  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 21:27:53.126251  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 21:27:53.153518  666795 cri.go:89] found id: ""
	I1217 21:27:53.153540  666795 logs.go:282] 0 containers: []
	W1217 21:27:53.153548  666795 logs.go:284] No container was found matching "kube-proxy"
	I1217 21:27:53.153554  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 21:27:53.153614  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 21:27:53.180938  666795 cri.go:89] found id: ""
	I1217 21:27:53.180966  666795 logs.go:282] 0 containers: []
	W1217 21:27:53.180976  666795 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 21:27:53.180983  666795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 21:27:53.181047  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 21:27:53.207873  666795 cri.go:89] found id: ""
	I1217 21:27:53.207942  666795 logs.go:282] 0 containers: []
	W1217 21:27:53.207968  666795 logs.go:284] No container was found matching "kindnet"
	I1217 21:27:53.207983  666795 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 21:27:53.208054  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 21:27:53.237732  666795 cri.go:89] found id: ""
	I1217 21:27:53.237757  666795 logs.go:282] 0 containers: []
	W1217 21:27:53.237766  666795 logs.go:284] No container was found matching "storage-provisioner"
	I1217 21:27:53.237776  666795 logs.go:123] Gathering logs for describe nodes ...
	I1217 21:27:53.237791  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 21:27:53.302075  666795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 21:27:53.302098  666795 logs.go:123] Gathering logs for CRI-O ...
	I1217 21:27:53.302110  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 21:27:53.334586  666795 logs.go:123] Gathering logs for container status ...
	I1217 21:27:53.334620  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 21:27:53.364699  666795 logs.go:123] Gathering logs for kubelet ...
	I1217 21:27:53.364779  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 21:27:53.436368  666795 logs.go:123] Gathering logs for dmesg ...
	I1217 21:27:53.436408  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 21:27:55.953320  666795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 21:27:55.965656  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 21:27:55.965729  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 21:27:55.992365  666795 cri.go:89] found id: ""
	I1217 21:27:55.992387  666795 logs.go:282] 0 containers: []
	W1217 21:27:55.992395  666795 logs.go:284] No container was found matching "kube-apiserver"
	I1217 21:27:55.992402  666795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 21:27:55.992466  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 21:27:56.024551  666795 cri.go:89] found id: ""
	I1217 21:27:56.024587  666795 logs.go:282] 0 containers: []
	W1217 21:27:56.024597  666795 logs.go:284] No container was found matching "etcd"
	I1217 21:27:56.024609  666795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 21:27:56.024673  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 21:27:56.051235  666795 cri.go:89] found id: ""
	I1217 21:27:56.051260  666795 logs.go:282] 0 containers: []
	W1217 21:27:56.051269  666795 logs.go:284] No container was found matching "coredns"
	I1217 21:27:56.051275  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 21:27:56.051336  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 21:27:56.088141  666795 cri.go:89] found id: ""
	I1217 21:27:56.088164  666795 logs.go:282] 0 containers: []
	W1217 21:27:56.088174  666795 logs.go:284] No container was found matching "kube-scheduler"
	I1217 21:27:56.088180  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 21:27:56.088241  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 21:27:56.121038  666795 cri.go:89] found id: ""
	I1217 21:27:56.121058  666795 logs.go:282] 0 containers: []
	W1217 21:27:56.121066  666795 logs.go:284] No container was found matching "kube-proxy"
	I1217 21:27:56.121072  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 21:27:56.121134  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 21:27:56.151845  666795 cri.go:89] found id: ""
	I1217 21:27:56.151871  666795 logs.go:282] 0 containers: []
	W1217 21:27:56.151881  666795 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 21:27:56.151887  666795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 21:27:56.151947  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 21:27:56.177711  666795 cri.go:89] found id: ""
	I1217 21:27:56.177737  666795 logs.go:282] 0 containers: []
	W1217 21:27:56.177747  666795 logs.go:284] No container was found matching "kindnet"
	I1217 21:27:56.177756  666795 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 21:27:56.177814  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 21:27:56.202847  666795 cri.go:89] found id: ""
	I1217 21:27:56.202871  666795 logs.go:282] 0 containers: []
	W1217 21:27:56.202880  666795 logs.go:284] No container was found matching "storage-provisioner"
	I1217 21:27:56.202891  666795 logs.go:123] Gathering logs for kubelet ...
	I1217 21:27:56.202912  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 21:27:56.270148  666795 logs.go:123] Gathering logs for dmesg ...
	I1217 21:27:56.270186  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 21:27:56.288128  666795 logs.go:123] Gathering logs for describe nodes ...
	I1217 21:27:56.288158  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 21:27:56.357482  666795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 21:27:56.357549  666795 logs.go:123] Gathering logs for CRI-O ...
	I1217 21:27:56.357576  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 21:27:56.388790  666795 logs.go:123] Gathering logs for container status ...
	I1217 21:27:56.388825  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 21:27:58.919743  666795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 21:27:58.929786  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 21:27:58.929854  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 21:27:58.955051  666795 cri.go:89] found id: ""
	I1217 21:27:58.955079  666795 logs.go:282] 0 containers: []
	W1217 21:27:58.955088  666795 logs.go:284] No container was found matching "kube-apiserver"
	I1217 21:27:58.955094  666795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 21:27:58.955154  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 21:27:58.983012  666795 cri.go:89] found id: ""
	I1217 21:27:58.983047  666795 logs.go:282] 0 containers: []
	W1217 21:27:58.983056  666795 logs.go:284] No container was found matching "etcd"
	I1217 21:27:58.983063  666795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 21:27:58.983130  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 21:27:59.011647  666795 cri.go:89] found id: ""
	I1217 21:27:59.011669  666795 logs.go:282] 0 containers: []
	W1217 21:27:59.011742  666795 logs.go:284] No container was found matching "coredns"
	I1217 21:27:59.011752  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 21:27:59.011882  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 21:27:59.038546  666795 cri.go:89] found id: ""
	I1217 21:27:59.038572  666795 logs.go:282] 0 containers: []
	W1217 21:27:59.038581  666795 logs.go:284] No container was found matching "kube-scheduler"
	I1217 21:27:59.038588  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 21:27:59.038646  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 21:27:59.070743  666795 cri.go:89] found id: ""
	I1217 21:27:59.070766  666795 logs.go:282] 0 containers: []
	W1217 21:27:59.070774  666795 logs.go:284] No container was found matching "kube-proxy"
	I1217 21:27:59.070780  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 21:27:59.070836  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 21:27:59.102238  666795 cri.go:89] found id: ""
	I1217 21:27:59.102261  666795 logs.go:282] 0 containers: []
	W1217 21:27:59.102269  666795 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 21:27:59.102276  666795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 21:27:59.102339  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 21:27:59.136689  666795 cri.go:89] found id: ""
	I1217 21:27:59.136713  666795 logs.go:282] 0 containers: []
	W1217 21:27:59.136721  666795 logs.go:284] No container was found matching "kindnet"
	I1217 21:27:59.136728  666795 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 21:27:59.136789  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 21:27:59.163095  666795 cri.go:89] found id: ""
	I1217 21:27:59.163118  666795 logs.go:282] 0 containers: []
	W1217 21:27:59.163127  666795 logs.go:284] No container was found matching "storage-provisioner"
	I1217 21:27:59.163136  666795 logs.go:123] Gathering logs for describe nodes ...
	I1217 21:27:59.163148  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 21:27:59.232898  666795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 21:27:59.232914  666795 logs.go:123] Gathering logs for CRI-O ...
	I1217 21:27:59.232927  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 21:27:59.264246  666795 logs.go:123] Gathering logs for container status ...
	I1217 21:27:59.264292  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 21:27:59.295813  666795 logs.go:123] Gathering logs for kubelet ...
	I1217 21:27:59.295848  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 21:27:59.364842  666795 logs.go:123] Gathering logs for dmesg ...
	I1217 21:27:59.364879  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 21:28:01.882342  666795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 21:28:01.892494  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 21:28:01.892568  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 21:28:01.919556  666795 cri.go:89] found id: ""
	I1217 21:28:01.919593  666795 logs.go:282] 0 containers: []
	W1217 21:28:01.919603  666795 logs.go:284] No container was found matching "kube-apiserver"
	I1217 21:28:01.919611  666795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 21:28:01.919673  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 21:28:01.954703  666795 cri.go:89] found id: ""
	I1217 21:28:01.954725  666795 logs.go:282] 0 containers: []
	W1217 21:28:01.954735  666795 logs.go:284] No container was found matching "etcd"
	I1217 21:28:01.954741  666795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 21:28:01.954798  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 21:28:01.981436  666795 cri.go:89] found id: ""
	I1217 21:28:01.981465  666795 logs.go:282] 0 containers: []
	W1217 21:28:01.981475  666795 logs.go:284] No container was found matching "coredns"
	I1217 21:28:01.981482  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 21:28:01.981544  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 21:28:02.011689  666795 cri.go:89] found id: ""
	I1217 21:28:02.011732  666795 logs.go:282] 0 containers: []
	W1217 21:28:02.011743  666795 logs.go:284] No container was found matching "kube-scheduler"
	I1217 21:28:02.011750  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 21:28:02.011817  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 21:28:02.040388  666795 cri.go:89] found id: ""
	I1217 21:28:02.040415  666795 logs.go:282] 0 containers: []
	W1217 21:28:02.040425  666795 logs.go:284] No container was found matching "kube-proxy"
	I1217 21:28:02.040431  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 21:28:02.040492  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 21:28:02.073079  666795 cri.go:89] found id: ""
	I1217 21:28:02.073107  666795 logs.go:282] 0 containers: []
	W1217 21:28:02.073116  666795 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 21:28:02.073122  666795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 21:28:02.073186  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 21:28:02.105924  666795 cri.go:89] found id: ""
	I1217 21:28:02.105951  666795 logs.go:282] 0 containers: []
	W1217 21:28:02.105960  666795 logs.go:284] No container was found matching "kindnet"
	I1217 21:28:02.105966  666795 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 21:28:02.106024  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 21:28:02.137872  666795 cri.go:89] found id: ""
	I1217 21:28:02.137900  666795 logs.go:282] 0 containers: []
	W1217 21:28:02.137909  666795 logs.go:284] No container was found matching "storage-provisioner"
	I1217 21:28:02.137918  666795 logs.go:123] Gathering logs for describe nodes ...
	I1217 21:28:02.137930  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 21:28:02.212276  666795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 21:28:02.212295  666795 logs.go:123] Gathering logs for CRI-O ...
	I1217 21:28:02.212341  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 21:28:02.243953  666795 logs.go:123] Gathering logs for container status ...
	I1217 21:28:02.243989  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 21:28:02.275895  666795 logs.go:123] Gathering logs for kubelet ...
	I1217 21:28:02.275924  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 21:28:02.344861  666795 logs.go:123] Gathering logs for dmesg ...
	I1217 21:28:02.344897  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 21:28:04.862069  666795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 21:28:04.872142  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 21:28:04.872211  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 21:28:04.897407  666795 cri.go:89] found id: ""
	I1217 21:28:04.897435  666795 logs.go:282] 0 containers: []
	W1217 21:28:04.897446  666795 logs.go:284] No container was found matching "kube-apiserver"
	I1217 21:28:04.897453  666795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 21:28:04.897523  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 21:28:04.923354  666795 cri.go:89] found id: ""
	I1217 21:28:04.923397  666795 logs.go:282] 0 containers: []
	W1217 21:28:04.923407  666795 logs.go:284] No container was found matching "etcd"
	I1217 21:28:04.923413  666795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 21:28:04.923487  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 21:28:04.952833  666795 cri.go:89] found id: ""
	I1217 21:28:04.952911  666795 logs.go:282] 0 containers: []
	W1217 21:28:04.952934  666795 logs.go:284] No container was found matching "coredns"
	I1217 21:28:04.952948  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 21:28:04.953030  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 21:28:04.980755  666795 cri.go:89] found id: ""
	I1217 21:28:04.980822  666795 logs.go:282] 0 containers: []
	W1217 21:28:04.980845  666795 logs.go:284] No container was found matching "kube-scheduler"
	I1217 21:28:04.980859  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 21:28:04.980933  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 21:28:05.011255  666795 cri.go:89] found id: ""
	I1217 21:28:05.011292  666795 logs.go:282] 0 containers: []
	W1217 21:28:05.011302  666795 logs.go:284] No container was found matching "kube-proxy"
	I1217 21:28:05.011308  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 21:28:05.011380  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 21:28:05.039082  666795 cri.go:89] found id: ""
	I1217 21:28:05.039120  666795 logs.go:282] 0 containers: []
	W1217 21:28:05.039131  666795 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 21:28:05.039138  666795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 21:28:05.039212  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 21:28:05.065040  666795 cri.go:89] found id: ""
	I1217 21:28:05.065070  666795 logs.go:282] 0 containers: []
	W1217 21:28:05.065079  666795 logs.go:284] No container was found matching "kindnet"
	I1217 21:28:05.065085  666795 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 21:28:05.065146  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 21:28:05.104636  666795 cri.go:89] found id: ""
	I1217 21:28:05.104681  666795 logs.go:282] 0 containers: []
	W1217 21:28:05.104691  666795 logs.go:284] No container was found matching "storage-provisioner"
	I1217 21:28:05.104700  666795 logs.go:123] Gathering logs for describe nodes ...
	I1217 21:28:05.104715  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 21:28:05.182925  666795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 21:28:05.182945  666795 logs.go:123] Gathering logs for CRI-O ...
	I1217 21:28:05.182968  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 21:28:05.214215  666795 logs.go:123] Gathering logs for container status ...
	I1217 21:28:05.214249  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 21:28:05.246601  666795 logs.go:123] Gathering logs for kubelet ...
	I1217 21:28:05.246631  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 21:28:05.314229  666795 logs.go:123] Gathering logs for dmesg ...
	I1217 21:28:05.314267  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 21:28:07.832187  666795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 21:28:07.842399  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 21:28:07.842483  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 21:28:07.872383  666795 cri.go:89] found id: ""
	I1217 21:28:07.872455  666795 logs.go:282] 0 containers: []
	W1217 21:28:07.872481  666795 logs.go:284] No container was found matching "kube-apiserver"
	I1217 21:28:07.872495  666795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 21:28:07.872575  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 21:28:07.898835  666795 cri.go:89] found id: ""
	I1217 21:28:07.898873  666795 logs.go:282] 0 containers: []
	W1217 21:28:07.898883  666795 logs.go:284] No container was found matching "etcd"
	I1217 21:28:07.898889  666795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 21:28:07.898960  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 21:28:07.924927  666795 cri.go:89] found id: ""
	I1217 21:28:07.924951  666795 logs.go:282] 0 containers: []
	W1217 21:28:07.924960  666795 logs.go:284] No container was found matching "coredns"
	I1217 21:28:07.924966  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 21:28:07.925023  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 21:28:07.951365  666795 cri.go:89] found id: ""
	I1217 21:28:07.951389  666795 logs.go:282] 0 containers: []
	W1217 21:28:07.951399  666795 logs.go:284] No container was found matching "kube-scheduler"
	I1217 21:28:07.951406  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 21:28:07.951473  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 21:28:07.981343  666795 cri.go:89] found id: ""
	I1217 21:28:07.981421  666795 logs.go:282] 0 containers: []
	W1217 21:28:07.981437  666795 logs.go:284] No container was found matching "kube-proxy"
	I1217 21:28:07.981445  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 21:28:07.981502  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 21:28:08.019138  666795 cri.go:89] found id: ""
	I1217 21:28:08.019163  666795 logs.go:282] 0 containers: []
	W1217 21:28:08.019173  666795 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 21:28:08.019180  666795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 21:28:08.019249  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 21:28:08.047020  666795 cri.go:89] found id: ""
	I1217 21:28:08.047045  666795 logs.go:282] 0 containers: []
	W1217 21:28:08.047055  666795 logs.go:284] No container was found matching "kindnet"
	I1217 21:28:08.047063  666795 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 21:28:08.047131  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 21:28:08.082006  666795 cri.go:89] found id: ""
	I1217 21:28:08.082029  666795 logs.go:282] 0 containers: []
	W1217 21:28:08.082038  666795 logs.go:284] No container was found matching "storage-provisioner"
	I1217 21:28:08.082047  666795 logs.go:123] Gathering logs for CRI-O ...
	I1217 21:28:08.082058  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 21:28:08.118057  666795 logs.go:123] Gathering logs for container status ...
	I1217 21:28:08.118095  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 21:28:08.155182  666795 logs.go:123] Gathering logs for kubelet ...
	I1217 21:28:08.155210  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 21:28:08.226953  666795 logs.go:123] Gathering logs for dmesg ...
	I1217 21:28:08.226990  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 21:28:08.244069  666795 logs.go:123] Gathering logs for describe nodes ...
	I1217 21:28:08.244100  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 21:28:08.313022  666795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
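	
	Every gather cycle in this stretch collects the same five sources: kubelet and CRI-O via journalctl, dmesg, describe nodes via the bundled kubectl, and container status via crictl with a docker fallback. A stand-alone Go sketch that runs those five commands, copied verbatim from the log, under an illustrative wrapper (this is not minikube's logs.go, and it assumes the same binaries and paths exist on the host):
	
	    // Hedged sketch: run each log source the cycles above collect and
	    // label its output. Command strings are verbatim from the log.
	    package main
	
	    import (
	        "fmt"
	        "os/exec"
	    )
	
	    func main() {
	        type src struct{ name, cmd string }
	        sources := []src{
	            {"kubelet", "sudo journalctl -u kubelet -n 400"},
	            {"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
	            {"describe nodes", "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"},
	            {"CRI-O", "sudo journalctl -u crio -n 400"},
	            {"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
	        }
	        for _, s := range sources {
	            fmt.Println("==>", s.name)
	            out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
	            fmt.Print(string(out))
	            if err != nil {
	                fmt.Println("(exited with error:", err, ")")
	            }
	        }
	    }
	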
	I1217 21:28:10.814584  666795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 21:28:10.825266  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 21:28:10.825338  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 21:28:10.854374  666795 cri.go:89] found id: ""
	I1217 21:28:10.854398  666795 logs.go:282] 0 containers: []
	W1217 21:28:10.854408  666795 logs.go:284] No container was found matching "kube-apiserver"
	I1217 21:28:10.854414  666795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 21:28:10.854475  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 21:28:10.880971  666795 cri.go:89] found id: ""
	I1217 21:28:10.880995  666795 logs.go:282] 0 containers: []
	W1217 21:28:10.881016  666795 logs.go:284] No container was found matching "etcd"
	I1217 21:28:10.881022  666795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 21:28:10.881080  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 21:28:10.908177  666795 cri.go:89] found id: ""
	I1217 21:28:10.908201  666795 logs.go:282] 0 containers: []
	W1217 21:28:10.908210  666795 logs.go:284] No container was found matching "coredns"
	I1217 21:28:10.908216  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 21:28:10.908305  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 21:28:10.934839  666795 cri.go:89] found id: ""
	I1217 21:28:10.934915  666795 logs.go:282] 0 containers: []
	W1217 21:28:10.934940  666795 logs.go:284] No container was found matching "kube-scheduler"
	I1217 21:28:10.934960  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 21:28:10.935038  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 21:28:10.962595  666795 cri.go:89] found id: ""
	I1217 21:28:10.962660  666795 logs.go:282] 0 containers: []
	W1217 21:28:10.962686  666795 logs.go:284] No container was found matching "kube-proxy"
	I1217 21:28:10.962706  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 21:28:10.962777  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 21:28:10.988171  666795 cri.go:89] found id: ""
	I1217 21:28:10.988236  666795 logs.go:282] 0 containers: []
	W1217 21:28:10.988250  666795 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 21:28:10.988258  666795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 21:28:10.988328  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 21:28:11.018808  666795 cri.go:89] found id: ""
	I1217 21:28:11.018842  666795 logs.go:282] 0 containers: []
	W1217 21:28:11.018852  666795 logs.go:284] No container was found matching "kindnet"
	I1217 21:28:11.018859  666795 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 21:28:11.018933  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 21:28:11.046820  666795 cri.go:89] found id: ""
	I1217 21:28:11.046845  666795 logs.go:282] 0 containers: []
	W1217 21:28:11.046854  666795 logs.go:284] No container was found matching "storage-provisioner"
	I1217 21:28:11.046864  666795 logs.go:123] Gathering logs for kubelet ...
	I1217 21:28:11.046886  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 21:28:11.116605  666795 logs.go:123] Gathering logs for dmesg ...
	I1217 21:28:11.116644  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 21:28:11.134366  666795 logs.go:123] Gathering logs for describe nodes ...
	I1217 21:28:11.134398  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 21:28:11.205945  666795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 21:28:11.205969  666795 logs.go:123] Gathering logs for CRI-O ...
	I1217 21:28:11.205984  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 21:28:11.237994  666795 logs.go:123] Gathering logs for container status ...
	I1217 21:28:11.238027  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
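The block above is one full iteration of the apiserver health-check loop this run is stuck in: probe for a live kube-apiserver process with pgrep, enumerate CRI containers for each control-plane component with crictl ps -a --quiet --name=..., and, when every query comes back empty, gather the kubelet, dmesg, describe-nodes, CRI-O, and container-status logs seen in the report. A condensed sketch of that cycle follows; the command strings are taken from the log, but the helper names are hypothetical and the real code runs these over ssh_runner rather than locally:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// run executes a command on the node; minikube routes this through
// ssh_runner, here it is a plain local exec for illustration.
func run(cmd string) (string, error) {
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	return string(out), err
}

// components mirrors the names queried in the log above.
var components = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
}

func probeCycle() {
	// First check for a live apiserver process at all.
	if _, err := run("sudo pgrep -xnf kube-apiserver.*minikube.*"); err == nil {
		return // apiserver process found; nothing to gather
	}
	// Then list CRI containers (any state) per component.
	for _, name := range components {
		out, _ := run("sudo crictl ps -a --quiet --name=" + name)
		if len(strings.Fields(out)) == 0 {
			fmt.Printf("W: No container was found matching %q\n", name)
		}
	}
	// Nothing running: gather the same logs the report shows.
	for _, cmd := range []string{
		`sudo journalctl -u kubelet -n 400`,
		`sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`,
		`sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig`,
		`sudo journalctl -u crio -n 400`,
		"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	} {
		if out, err := run(cmd); err != nil {
			fmt.Printf("W: %s failed: %v\n%s\n", cmd, err, out)
		}
	}
}

func main() { probeCycle() }

Note the last gather command falls back twice: if crictl is not on PATH it still tries the bare name, and if that fails it falls back to docker ps -a.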
	... [probe cycles at 21:28:13, 21:28:16, 21:28:19, 21:28:22, 21:28:25, 21:28:28, 21:28:31, 21:28:34, 21:28:37, and 21:28:40 elided: each repeats the same empty crictl queries and the same failed "describe nodes" ending in "The connection to the server localhost:8443 was refused"] ...
	I1217 21:28:43.568575  666795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 21:28:43.578849  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 21:28:43.578943  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 21:28:43.608792  666795 cri.go:89] found id: ""
	I1217 21:28:43.608815  666795 logs.go:282] 0 containers: []
	W1217 21:28:43.608824  666795 logs.go:284] No container was found matching "kube-apiserver"
	I1217 21:28:43.608831  666795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 21:28:43.608889  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 21:28:43.640191  666795 cri.go:89] found id: ""
	I1217 21:28:43.640214  666795 logs.go:282] 0 containers: []
	W1217 21:28:43.640223  666795 logs.go:284] No container was found matching "etcd"
	I1217 21:28:43.640229  666795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 21:28:43.640290  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 21:28:43.666990  666795 cri.go:89] found id: ""
	I1217 21:28:43.667016  666795 logs.go:282] 0 containers: []
	W1217 21:28:43.667025  666795 logs.go:284] No container was found matching "coredns"
	I1217 21:28:43.667031  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 21:28:43.667089  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 21:28:43.693058  666795 cri.go:89] found id: ""
	I1217 21:28:43.693080  666795 logs.go:282] 0 containers: []
	W1217 21:28:43.693089  666795 logs.go:284] No container was found matching "kube-scheduler"
	I1217 21:28:43.693097  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 21:28:43.693160  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 21:28:43.722341  666795 cri.go:89] found id: ""
	I1217 21:28:43.722364  666795 logs.go:282] 0 containers: []
	W1217 21:28:43.722373  666795 logs.go:284] No container was found matching "kube-proxy"
	I1217 21:28:43.722380  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 21:28:43.722446  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 21:28:43.756680  666795 cri.go:89] found id: ""
	I1217 21:28:43.756702  666795 logs.go:282] 0 containers: []
	W1217 21:28:43.756711  666795 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 21:28:43.756719  666795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 21:28:43.756809  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 21:28:43.783249  666795 cri.go:89] found id: ""
	I1217 21:28:43.783272  666795 logs.go:282] 0 containers: []
	W1217 21:28:43.783281  666795 logs.go:284] No container was found matching "kindnet"
	I1217 21:28:43.783287  666795 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 21:28:43.783355  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 21:28:43.809593  666795 cri.go:89] found id: ""
	I1217 21:28:43.809615  666795 logs.go:282] 0 containers: []
	W1217 21:28:43.809623  666795 logs.go:284] No container was found matching "storage-provisioner"
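
The block above is one pass of minikube's container census: each expected control-plane component is queried through the CRI in all states, and every query returns an empty ID list. A hedged shell equivalent of that pass (the component names are copied from the log; the loop itself is only an illustration, not minikube's actual Go code):

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet storage-provisioner; do
        ids=$(sudo crictl ps -a --quiet --name="$name")
        # An empty result corresponds to the 'found id: ""' lines above.
        [ -z "$ids" ] && echo "no container matching \"$name\""
    done
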
	I1217 21:28:43.809633  666795 logs.go:123] Gathering logs for kubelet ...
	I1217 21:28:43.809645  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 21:28:43.878044  666795 logs.go:123] Gathering logs for dmesg ...
	I1217 21:28:43.878081  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
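
Both journal gathers are capped at the last 400 lines per unit, and the dmesg gather keeps only warning-and-worse kernel messages, so a wedged node cannot flood the report. The commands are exactly as logged; the flag notes are ours:

    sudo journalctl -u kubelet -n 400      # newest 400 kubelet lines only
    sudo journalctl -u crio -n 400         # same cap for the CRI-O unit
    # -H human-readable, -P no pager, -L=never disables color
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
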
	I1217 21:28:43.894575  666795 logs.go:123] Gathering logs for describe nodes ...
	I1217 21:28:43.894606  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 21:28:43.958458  666795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 21:28:43.958546  666795 logs.go:123] Gathering logs for CRI-O ...
	I1217 21:28:43.958578  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 21:28:43.991614  666795 logs.go:123] Gathering logs for container status ...
	I1217 21:28:43.991688  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
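
The "container status" gather is runtime-agnostic on purpose: it resolves crictl if installed and otherwise falls through to docker ps. Spelled out (the command is verbatim from the log; the annotation is ours):

    # `which crictl || echo crictl` yields the crictl path when present, or the
    # bare name (which then fails) so the || branch falls back to Docker.
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
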
	I1217 21:28:46.534021  666795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 21:28:46.546413  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 21:28:46.546482  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 21:28:46.574421  666795 cri.go:89] found id: ""
	I1217 21:28:46.574445  666795 logs.go:282] 0 containers: []
	W1217 21:28:46.574453  666795 logs.go:284] No container was found matching "kube-apiserver"
	I1217 21:28:46.574460  666795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 21:28:46.574518  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 21:28:46.601048  666795 cri.go:89] found id: ""
	I1217 21:28:46.601070  666795 logs.go:282] 0 containers: []
	W1217 21:28:46.601079  666795 logs.go:284] No container was found matching "etcd"
	I1217 21:28:46.601085  666795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 21:28:46.601144  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 21:28:46.627369  666795 cri.go:89] found id: ""
	I1217 21:28:46.627391  666795 logs.go:282] 0 containers: []
	W1217 21:28:46.627402  666795 logs.go:284] No container was found matching "coredns"
	I1217 21:28:46.627407  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 21:28:46.627468  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 21:28:46.653144  666795 cri.go:89] found id: ""
	I1217 21:28:46.653166  666795 logs.go:282] 0 containers: []
	W1217 21:28:46.653175  666795 logs.go:284] No container was found matching "kube-scheduler"
	I1217 21:28:46.653182  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 21:28:46.653243  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 21:28:46.681200  666795 cri.go:89] found id: ""
	I1217 21:28:46.681223  666795 logs.go:282] 0 containers: []
	W1217 21:28:46.681233  666795 logs.go:284] No container was found matching "kube-proxy"
	I1217 21:28:46.681239  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 21:28:46.681301  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 21:28:46.706318  666795 cri.go:89] found id: ""
	I1217 21:28:46.706340  666795 logs.go:282] 0 containers: []
	W1217 21:28:46.706350  666795 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 21:28:46.706356  666795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 21:28:46.706412  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 21:28:46.734902  666795 cri.go:89] found id: ""
	I1217 21:28:46.734929  666795 logs.go:282] 0 containers: []
	W1217 21:28:46.734939  666795 logs.go:284] No container was found matching "kindnet"
	I1217 21:28:46.734946  666795 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 21:28:46.735012  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 21:28:46.760717  666795 cri.go:89] found id: ""
	I1217 21:28:46.760740  666795 logs.go:282] 0 containers: []
	W1217 21:28:46.760748  666795 logs.go:284] No container was found matching "storage-provisioner"
	I1217 21:28:46.760757  666795 logs.go:123] Gathering logs for kubelet ...
	I1217 21:28:46.760769  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 21:28:46.828817  666795 logs.go:123] Gathering logs for dmesg ...
	I1217 21:28:46.828853  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 21:28:46.845611  666795 logs.go:123] Gathering logs for describe nodes ...
	I1217 21:28:46.845640  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 21:28:46.910749  666795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 21:28:46.910766  666795 logs.go:123] Gathering logs for CRI-O ...
	I1217 21:28:46.910778  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 21:28:46.941481  666795 logs.go:123] Gathering logs for container status ...
	I1217 21:28:46.941519  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
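
Every kubectl call here pins --kubeconfig=/var/lib/minikube/kubeconfig, the config stored on the node itself, so the refused connection shows the apiserver is down on the node rather than pointing to a broken tunnel or port-forward from the host. A quick hedged check of where that kubeconfig points (the grep is ours; the expected value is inferred from the errors above):

    sudo grep 'server:' /var/lib/minikube/kubeconfig
    # expected: server: https://localhost:8443
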
	I1217 21:28:49.480998  666795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 21:28:49.490844  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 21:28:49.490913  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 21:28:49.518069  666795 cri.go:89] found id: ""
	I1217 21:28:49.518101  666795 logs.go:282] 0 containers: []
	W1217 21:28:49.518111  666795 logs.go:284] No container was found matching "kube-apiserver"
	I1217 21:28:49.518117  666795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 21:28:49.518187  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 21:28:49.547856  666795 cri.go:89] found id: ""
	I1217 21:28:49.547879  666795 logs.go:282] 0 containers: []
	W1217 21:28:49.547888  666795 logs.go:284] No container was found matching "etcd"
	I1217 21:28:49.547894  666795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 21:28:49.547951  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 21:28:49.575923  666795 cri.go:89] found id: ""
	I1217 21:28:49.575948  666795 logs.go:282] 0 containers: []
	W1217 21:28:49.575957  666795 logs.go:284] No container was found matching "coredns"
	I1217 21:28:49.575963  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 21:28:49.576025  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 21:28:49.601486  666795 cri.go:89] found id: ""
	I1217 21:28:49.601512  666795 logs.go:282] 0 containers: []
	W1217 21:28:49.601522  666795 logs.go:284] No container was found matching "kube-scheduler"
	I1217 21:28:49.601529  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 21:28:49.601590  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 21:28:49.627148  666795 cri.go:89] found id: ""
	I1217 21:28:49.627177  666795 logs.go:282] 0 containers: []
	W1217 21:28:49.627187  666795 logs.go:284] No container was found matching "kube-proxy"
	I1217 21:28:49.627193  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 21:28:49.627255  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 21:28:49.654254  666795 cri.go:89] found id: ""
	I1217 21:28:49.654278  666795 logs.go:282] 0 containers: []
	W1217 21:28:49.654287  666795 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 21:28:49.654294  666795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 21:28:49.654365  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 21:28:49.681027  666795 cri.go:89] found id: ""
	I1217 21:28:49.681057  666795 logs.go:282] 0 containers: []
	W1217 21:28:49.681066  666795 logs.go:284] No container was found matching "kindnet"
	I1217 21:28:49.681072  666795 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 21:28:49.681131  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 21:28:49.710345  666795 cri.go:89] found id: ""
	I1217 21:28:49.710372  666795 logs.go:282] 0 containers: []
	W1217 21:28:49.710382  666795 logs.go:284] No container was found matching "storage-provisioner"
	I1217 21:28:49.710392  666795 logs.go:123] Gathering logs for kubelet ...
	I1217 21:28:49.710443  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 21:28:49.779450  666795 logs.go:123] Gathering logs for dmesg ...
	I1217 21:28:49.779488  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 21:28:49.796309  666795 logs.go:123] Gathering logs for describe nodes ...
	I1217 21:28:49.796343  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 21:28:49.864217  666795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 21:28:49.864242  666795 logs.go:123] Gathering logs for CRI-O ...
	I1217 21:28:49.864255  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 21:28:49.895189  666795 logs.go:123] Gathering logs for container status ...
	I1217 21:28:49.895224  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
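
From the timestamps, the whole gather pass repeats roughly every 3 seconds and keeps doing so until the apiserver appears or minikube's wait deadline expires. A minimal sketch of an equivalent poll loop, with the interval taken from the observed cadence and the deadline purely an assumption for illustration (minikube's real loop is Go, not shell):

    deadline=$((SECONDS + 360))   # assumed overall wait budget, not from the log
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
        [ "$SECONDS" -ge "$deadline" ] && { echo "apiserver never came up"; exit 1; }
        sleep 3                   # matches the ~3 s gap between passes above
    done
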
	I1217 21:28:52.424899  666795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 21:28:52.434967  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 21:28:52.435039  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 21:28:52.462447  666795 cri.go:89] found id: ""
	I1217 21:28:52.462470  666795 logs.go:282] 0 containers: []
	W1217 21:28:52.462478  666795 logs.go:284] No container was found matching "kube-apiserver"
	I1217 21:28:52.462484  666795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 21:28:52.462547  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 21:28:52.487352  666795 cri.go:89] found id: ""
	I1217 21:28:52.487373  666795 logs.go:282] 0 containers: []
	W1217 21:28:52.487382  666795 logs.go:284] No container was found matching "etcd"
	I1217 21:28:52.487388  666795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 21:28:52.487446  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 21:28:52.514132  666795 cri.go:89] found id: ""
	I1217 21:28:52.514158  666795 logs.go:282] 0 containers: []
	W1217 21:28:52.514167  666795 logs.go:284] No container was found matching "coredns"
	I1217 21:28:52.514174  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 21:28:52.514232  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 21:28:52.544007  666795 cri.go:89] found id: ""
	I1217 21:28:52.544031  666795 logs.go:282] 0 containers: []
	W1217 21:28:52.544040  666795 logs.go:284] No container was found matching "kube-scheduler"
	I1217 21:28:52.544046  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 21:28:52.544108  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 21:28:52.570290  666795 cri.go:89] found id: ""
	I1217 21:28:52.570312  666795 logs.go:282] 0 containers: []
	W1217 21:28:52.570320  666795 logs.go:284] No container was found matching "kube-proxy"
	I1217 21:28:52.570327  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 21:28:52.570384  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 21:28:52.599016  666795 cri.go:89] found id: ""
	I1217 21:28:52.599041  666795 logs.go:282] 0 containers: []
	W1217 21:28:52.599051  666795 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 21:28:52.599058  666795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 21:28:52.599114  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 21:28:52.624364  666795 cri.go:89] found id: ""
	I1217 21:28:52.624389  666795 logs.go:282] 0 containers: []
	W1217 21:28:52.624398  666795 logs.go:284] No container was found matching "kindnet"
	I1217 21:28:52.624403  666795 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 21:28:52.624462  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 21:28:52.650375  666795 cri.go:89] found id: ""
	I1217 21:28:52.650400  666795 logs.go:282] 0 containers: []
	W1217 21:28:52.650409  666795 logs.go:284] No container was found matching "storage-provisioner"
	I1217 21:28:52.650418  666795 logs.go:123] Gathering logs for container status ...
	I1217 21:28:52.650442  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 21:28:52.678329  666795 logs.go:123] Gathering logs for kubelet ...
	I1217 21:28:52.678355  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 21:28:52.748734  666795 logs.go:123] Gathering logs for dmesg ...
	I1217 21:28:52.748771  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 21:28:52.765635  666795 logs.go:123] Gathering logs for describe nodes ...
	I1217 21:28:52.765665  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 21:28:52.829758  666795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 21:28:52.829779  666795 logs.go:123] Gathering logs for CRI-O ...
	I1217 21:28:52.829792  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
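
For reading the raw lines: the output follows standard klog layout, broken down below. Note also that the gather order shuffles between passes (the 21:28:52 pass above collected container status first), which is consistent with minikube iterating a Go map of log sources; Go randomizes map iteration order.

    W1217 21:28:52.829758  666795 logs.go:130] failed describe nodes: ...
    # W               -> severity (I = info, W = warning)
    # 1217            -> date, MMDD (Dec 17)
    # 21:28:52.829758 -> wall-clock time with microseconds
    # 666795          -> process ID (constant here: a single minikube run)
    # logs.go:130     -> source file and line inside minikube
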
	I1217 21:28:55.360729  666795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 21:28:55.373220  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 21:28:55.373287  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 21:28:55.397784  666795 cri.go:89] found id: ""
	I1217 21:28:55.397808  666795 logs.go:282] 0 containers: []
	W1217 21:28:55.397817  666795 logs.go:284] No container was found matching "kube-apiserver"
	I1217 21:28:55.397823  666795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 21:28:55.397878  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 21:28:55.423792  666795 cri.go:89] found id: ""
	I1217 21:28:55.423815  666795 logs.go:282] 0 containers: []
	W1217 21:28:55.423824  666795 logs.go:284] No container was found matching "etcd"
	I1217 21:28:55.423829  666795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 21:28:55.423889  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 21:28:55.448459  666795 cri.go:89] found id: ""
	I1217 21:28:55.448485  666795 logs.go:282] 0 containers: []
	W1217 21:28:55.448499  666795 logs.go:284] No container was found matching "coredns"
	I1217 21:28:55.448505  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 21:28:55.448567  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 21:28:55.473010  666795 cri.go:89] found id: ""
	I1217 21:28:55.473035  666795 logs.go:282] 0 containers: []
	W1217 21:28:55.473045  666795 logs.go:284] No container was found matching "kube-scheduler"
	I1217 21:28:55.473051  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 21:28:55.473111  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 21:28:55.497383  666795 cri.go:89] found id: ""
	I1217 21:28:55.497407  666795 logs.go:282] 0 containers: []
	W1217 21:28:55.497415  666795 logs.go:284] No container was found matching "kube-proxy"
	I1217 21:28:55.497421  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 21:28:55.497478  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 21:28:55.522537  666795 cri.go:89] found id: ""
	I1217 21:28:55.522615  666795 logs.go:282] 0 containers: []
	W1217 21:28:55.522627  666795 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 21:28:55.522634  666795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 21:28:55.522693  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 21:28:55.552524  666795 cri.go:89] found id: ""
	I1217 21:28:55.552547  666795 logs.go:282] 0 containers: []
	W1217 21:28:55.552556  666795 logs.go:284] No container was found matching "kindnet"
	I1217 21:28:55.552562  666795 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 21:28:55.552625  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 21:28:55.579496  666795 cri.go:89] found id: ""
	I1217 21:28:55.579524  666795 logs.go:282] 0 containers: []
	W1217 21:28:55.579533  666795 logs.go:284] No container was found matching "storage-provisioner"
	I1217 21:28:55.579542  666795 logs.go:123] Gathering logs for kubelet ...
	I1217 21:28:55.579553  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 21:28:55.646689  666795 logs.go:123] Gathering logs for dmesg ...
	I1217 21:28:55.646723  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 21:28:55.664340  666795 logs.go:123] Gathering logs for describe nodes ...
	I1217 21:28:55.664367  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 21:28:55.729208  666795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 21:28:55.729238  666795 logs.go:123] Gathering logs for CRI-O ...
	I1217 21:28:55.729250  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 21:28:55.761297  666795 logs.go:123] Gathering logs for container status ...
	I1217 21:28:55.761333  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 21:28:58.293770  666795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 21:28:58.303438  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 21:28:58.303514  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 21:28:58.340509  666795 cri.go:89] found id: ""
	I1217 21:28:58.340532  666795 logs.go:282] 0 containers: []
	W1217 21:28:58.340541  666795 logs.go:284] No container was found matching "kube-apiserver"
	I1217 21:28:58.340547  666795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 21:28:58.340605  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 21:28:58.376075  666795 cri.go:89] found id: ""
	I1217 21:28:58.376102  666795 logs.go:282] 0 containers: []
	W1217 21:28:58.376112  666795 logs.go:284] No container was found matching "etcd"
	I1217 21:28:58.376118  666795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 21:28:58.376175  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 21:28:58.405569  666795 cri.go:89] found id: ""
	I1217 21:28:58.405592  666795 logs.go:282] 0 containers: []
	W1217 21:28:58.405601  666795 logs.go:284] No container was found matching "coredns"
	I1217 21:28:58.405607  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 21:28:58.405664  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 21:28:58.436900  666795 cri.go:89] found id: ""
	I1217 21:28:58.436924  666795 logs.go:282] 0 containers: []
	W1217 21:28:58.436932  666795 logs.go:284] No container was found matching "kube-scheduler"
	I1217 21:28:58.436939  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 21:28:58.437001  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 21:28:58.463174  666795 cri.go:89] found id: ""
	I1217 21:28:58.463198  666795 logs.go:282] 0 containers: []
	W1217 21:28:58.463207  666795 logs.go:284] No container was found matching "kube-proxy"
	I1217 21:28:58.463215  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 21:28:58.463302  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 21:28:58.488652  666795 cri.go:89] found id: ""
	I1217 21:28:58.488675  666795 logs.go:282] 0 containers: []
	W1217 21:28:58.488684  666795 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 21:28:58.488690  666795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 21:28:58.488780  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 21:28:58.518779  666795 cri.go:89] found id: ""
	I1217 21:28:58.518803  666795 logs.go:282] 0 containers: []
	W1217 21:28:58.518812  666795 logs.go:284] No container was found matching "kindnet"
	I1217 21:28:58.518818  666795 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 21:28:58.518913  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 21:28:58.549084  666795 cri.go:89] found id: ""
	I1217 21:28:58.549109  666795 logs.go:282] 0 containers: []
	W1217 21:28:58.549118  666795 logs.go:284] No container was found matching "storage-provisioner"
	I1217 21:28:58.549126  666795 logs.go:123] Gathering logs for container status ...
	I1217 21:28:58.549137  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 21:28:58.578552  666795 logs.go:123] Gathering logs for kubelet ...
	I1217 21:28:58.578580  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 21:28:58.646807  666795 logs.go:123] Gathering logs for dmesg ...
	I1217 21:28:58.646846  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 21:28:58.663197  666795 logs.go:123] Gathering logs for describe nodes ...
	I1217 21:28:58.663229  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 21:28:58.731268  666795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 21:28:58.731287  666795 logs.go:123] Gathering logs for CRI-O ...
	I1217 21:28:58.731300  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
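
With etcd absent as well, nothing in the static-pod control plane ever came up, so the kubelet journal (why static pods are not being started) is the most informative capture here; the apiserver has no logs of its own to read. A hedged filter over the same capped journal read, with a grep pattern of our own choosing:

    sudo journalctl -u kubelet -n 400 \
        | grep -iE 'static pod|kube-apiserver|fail|error'
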
	I1217 21:29:01.263703  666795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 21:29:01.275022  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 21:29:01.275119  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 21:29:01.311534  666795 cri.go:89] found id: ""
	I1217 21:29:01.311556  666795 logs.go:282] 0 containers: []
	W1217 21:29:01.311564  666795 logs.go:284] No container was found matching "kube-apiserver"
	I1217 21:29:01.311570  666795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 21:29:01.311648  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 21:29:01.405777  666795 cri.go:89] found id: ""
	I1217 21:29:01.405799  666795 logs.go:282] 0 containers: []
	W1217 21:29:01.405808  666795 logs.go:284] No container was found matching "etcd"
	I1217 21:29:01.405813  666795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 21:29:01.405868  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 21:29:01.469560  666795 cri.go:89] found id: ""
	I1217 21:29:01.469584  666795 logs.go:282] 0 containers: []
	W1217 21:29:01.469593  666795 logs.go:284] No container was found matching "coredns"
	I1217 21:29:01.469599  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 21:29:01.469659  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 21:29:01.504567  666795 cri.go:89] found id: ""
	I1217 21:29:01.504589  666795 logs.go:282] 0 containers: []
	W1217 21:29:01.504598  666795 logs.go:284] No container was found matching "kube-scheduler"
	I1217 21:29:01.504604  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 21:29:01.504682  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 21:29:01.554319  666795 cri.go:89] found id: ""
	I1217 21:29:01.554341  666795 logs.go:282] 0 containers: []
	W1217 21:29:01.554350  666795 logs.go:284] No container was found matching "kube-proxy"
	I1217 21:29:01.554356  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 21:29:01.554417  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 21:29:01.598354  666795 cri.go:89] found id: ""
	I1217 21:29:01.598379  666795 logs.go:282] 0 containers: []
	W1217 21:29:01.598388  666795 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 21:29:01.598395  666795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 21:29:01.598454  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 21:29:01.633749  666795 cri.go:89] found id: ""
	I1217 21:29:01.633778  666795 logs.go:282] 0 containers: []
	W1217 21:29:01.633787  666795 logs.go:284] No container was found matching "kindnet"
	I1217 21:29:01.633793  666795 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 21:29:01.633851  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 21:29:01.673647  666795 cri.go:89] found id: ""
	I1217 21:29:01.673670  666795 logs.go:282] 0 containers: []
	W1217 21:29:01.673678  666795 logs.go:284] No container was found matching "storage-provisioner"
	I1217 21:29:01.673687  666795 logs.go:123] Gathering logs for kubelet ...
	I1217 21:29:01.673699  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 21:29:01.754003  666795 logs.go:123] Gathering logs for dmesg ...
	I1217 21:29:01.754043  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 21:29:01.772923  666795 logs.go:123] Gathering logs for describe nodes ...
	I1217 21:29:01.772954  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 21:29:01.871467  666795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 21:29:01.871489  666795 logs.go:123] Gathering logs for CRI-O ...
	I1217 21:29:01.871502  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 21:29:01.905937  666795 logs.go:123] Gathering logs for container status ...
	I1217 21:29:01.905973  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 21:29:04.451053  666795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 21:29:04.461070  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 21:29:04.461143  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 21:29:04.487074  666795 cri.go:89] found id: ""
	I1217 21:29:04.487098  666795 logs.go:282] 0 containers: []
	W1217 21:29:04.487108  666795 logs.go:284] No container was found matching "kube-apiserver"
	I1217 21:29:04.487115  666795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 21:29:04.487174  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 21:29:04.516268  666795 cri.go:89] found id: ""
	I1217 21:29:04.516290  666795 logs.go:282] 0 containers: []
	W1217 21:29:04.516299  666795 logs.go:284] No container was found matching "etcd"
	I1217 21:29:04.516305  666795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 21:29:04.516364  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 21:29:04.543005  666795 cri.go:89] found id: ""
	I1217 21:29:04.543040  666795 logs.go:282] 0 containers: []
	W1217 21:29:04.543052  666795 logs.go:284] No container was found matching "coredns"
	I1217 21:29:04.543061  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 21:29:04.543123  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 21:29:04.569600  666795 cri.go:89] found id: ""
	I1217 21:29:04.569625  666795 logs.go:282] 0 containers: []
	W1217 21:29:04.569634  666795 logs.go:284] No container was found matching "kube-scheduler"
	I1217 21:29:04.569640  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 21:29:04.569699  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 21:29:04.595215  666795 cri.go:89] found id: ""
	I1217 21:29:04.595285  666795 logs.go:282] 0 containers: []
	W1217 21:29:04.595308  666795 logs.go:284] No container was found matching "kube-proxy"
	I1217 21:29:04.595333  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 21:29:04.595408  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 21:29:04.622066  666795 cri.go:89] found id: ""
	I1217 21:29:04.622131  666795 logs.go:282] 0 containers: []
	W1217 21:29:04.622154  666795 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 21:29:04.622174  666795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 21:29:04.622249  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 21:29:04.653757  666795 cri.go:89] found id: ""
	I1217 21:29:04.653825  666795 logs.go:282] 0 containers: []
	W1217 21:29:04.653851  666795 logs.go:284] No container was found matching "kindnet"
	I1217 21:29:04.653869  666795 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 21:29:04.653942  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 21:29:04.679943  666795 cri.go:89] found id: ""
	I1217 21:29:04.680010  666795 logs.go:282] 0 containers: []
	W1217 21:29:04.680035  666795 logs.go:284] No container was found matching "storage-provisioner"
	I1217 21:29:04.680057  666795 logs.go:123] Gathering logs for CRI-O ...
	I1217 21:29:04.680084  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 21:29:04.711139  666795 logs.go:123] Gathering logs for container status ...
	I1217 21:29:04.711176  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 21:29:04.743012  666795 logs.go:123] Gathering logs for kubelet ...
	I1217 21:29:04.743040  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 21:29:04.814826  666795 logs.go:123] Gathering logs for dmesg ...
	I1217 21:29:04.814864  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 21:29:04.831795  666795 logs.go:123] Gathering logs for describe nodes ...
	I1217 21:29:04.831830  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 21:29:04.898112  666795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 21:29:07.398373  666795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 21:29:07.408239  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 21:29:07.408307  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 21:29:07.435089  666795 cri.go:89] found id: ""
	I1217 21:29:07.435111  666795 logs.go:282] 0 containers: []
	W1217 21:29:07.435119  666795 logs.go:284] No container was found matching "kube-apiserver"
	I1217 21:29:07.435125  666795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 21:29:07.435182  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 21:29:07.461687  666795 cri.go:89] found id: ""
	I1217 21:29:07.461714  666795 logs.go:282] 0 containers: []
	W1217 21:29:07.461723  666795 logs.go:284] No container was found matching "etcd"
	I1217 21:29:07.461729  666795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 21:29:07.461786  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 21:29:07.490631  666795 cri.go:89] found id: ""
	I1217 21:29:07.490659  666795 logs.go:282] 0 containers: []
	W1217 21:29:07.490669  666795 logs.go:284] No container was found matching "coredns"
	I1217 21:29:07.490676  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 21:29:07.490734  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 21:29:07.519226  666795 cri.go:89] found id: ""
	I1217 21:29:07.519251  666795 logs.go:282] 0 containers: []
	W1217 21:29:07.519261  666795 logs.go:284] No container was found matching "kube-scheduler"
	I1217 21:29:07.519268  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 21:29:07.519328  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 21:29:07.544048  666795 cri.go:89] found id: ""
	I1217 21:29:07.544071  666795 logs.go:282] 0 containers: []
	W1217 21:29:07.544079  666795 logs.go:284] No container was found matching "kube-proxy"
	I1217 21:29:07.544085  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 21:29:07.544145  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 21:29:07.569984  666795 cri.go:89] found id: ""
	I1217 21:29:07.570005  666795 logs.go:282] 0 containers: []
	W1217 21:29:07.570013  666795 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 21:29:07.570019  666795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 21:29:07.570078  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 21:29:07.594396  666795 cri.go:89] found id: ""
	I1217 21:29:07.594417  666795 logs.go:282] 0 containers: []
	W1217 21:29:07.594426  666795 logs.go:284] No container was found matching "kindnet"
	I1217 21:29:07.594432  666795 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 21:29:07.594490  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 21:29:07.624737  666795 cri.go:89] found id: ""
	I1217 21:29:07.624762  666795 logs.go:282] 0 containers: []
	W1217 21:29:07.624771  666795 logs.go:284] No container was found matching "storage-provisioner"
	I1217 21:29:07.624780  666795 logs.go:123] Gathering logs for kubelet ...
	I1217 21:29:07.624792  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 21:29:07.700163  666795 logs.go:123] Gathering logs for dmesg ...
	I1217 21:29:07.700207  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 21:29:07.716686  666795 logs.go:123] Gathering logs for describe nodes ...
	I1217 21:29:07.716718  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 21:29:07.784593  666795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 21:29:07.784617  666795 logs.go:123] Gathering logs for CRI-O ...
	I1217 21:29:07.784633  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 21:29:07.815565  666795 logs.go:123] Gathering logs for container status ...
	I1217 21:29:07.815618  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 21:29:10.351726  666795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 21:29:10.361920  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 21:29:10.362001  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 21:29:10.390828  666795 cri.go:89] found id: ""
	I1217 21:29:10.390854  666795 logs.go:282] 0 containers: []
	W1217 21:29:10.390870  666795 logs.go:284] No container was found matching "kube-apiserver"
	I1217 21:29:10.390876  666795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 21:29:10.390934  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 21:29:10.417285  666795 cri.go:89] found id: ""
	I1217 21:29:10.417308  666795 logs.go:282] 0 containers: []
	W1217 21:29:10.417316  666795 logs.go:284] No container was found matching "etcd"
	I1217 21:29:10.417322  666795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 21:29:10.417380  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 21:29:10.447875  666795 cri.go:89] found id: ""
	I1217 21:29:10.447902  666795 logs.go:282] 0 containers: []
	W1217 21:29:10.447917  666795 logs.go:284] No container was found matching "coredns"
	I1217 21:29:10.447924  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 21:29:10.448005  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 21:29:10.481436  666795 cri.go:89] found id: ""
	I1217 21:29:10.481459  666795 logs.go:282] 0 containers: []
	W1217 21:29:10.481468  666795 logs.go:284] No container was found matching "kube-scheduler"
	I1217 21:29:10.481474  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 21:29:10.481533  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 21:29:10.512430  666795 cri.go:89] found id: ""
	I1217 21:29:10.512460  666795 logs.go:282] 0 containers: []
	W1217 21:29:10.512470  666795 logs.go:284] No container was found matching "kube-proxy"
	I1217 21:29:10.512477  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 21:29:10.512554  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 21:29:10.547154  666795 cri.go:89] found id: ""
	I1217 21:29:10.547179  666795 logs.go:282] 0 containers: []
	W1217 21:29:10.547188  666795 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 21:29:10.547208  666795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 21:29:10.547266  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 21:29:10.575149  666795 cri.go:89] found id: ""
	I1217 21:29:10.575174  666795 logs.go:282] 0 containers: []
	W1217 21:29:10.575183  666795 logs.go:284] No container was found matching "kindnet"
	I1217 21:29:10.575189  666795 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 21:29:10.575258  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 21:29:10.601151  666795 cri.go:89] found id: ""
	I1217 21:29:10.601174  666795 logs.go:282] 0 containers: []
	W1217 21:29:10.601182  666795 logs.go:284] No container was found matching "storage-provisioner"
	I1217 21:29:10.601191  666795 logs.go:123] Gathering logs for kubelet ...
	I1217 21:29:10.601202  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 21:29:10.669036  666795 logs.go:123] Gathering logs for dmesg ...
	I1217 21:29:10.669074  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 21:29:10.685860  666795 logs.go:123] Gathering logs for describe nodes ...
	I1217 21:29:10.685891  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 21:29:10.753475  666795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 21:29:10.753498  666795 logs.go:123] Gathering logs for CRI-O ...
	I1217 21:29:10.753512  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 21:29:10.785951  666795 logs.go:123] Gathering logs for container status ...
	I1217 21:29:10.785986  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 21:29:13.318336  666795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 21:29:13.329911  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 21:29:13.329975  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 21:29:13.368651  666795 cri.go:89] found id: ""
	I1217 21:29:13.368671  666795 logs.go:282] 0 containers: []
	W1217 21:29:13.368680  666795 logs.go:284] No container was found matching "kube-apiserver"
	I1217 21:29:13.368685  666795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 21:29:13.368743  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 21:29:13.411008  666795 cri.go:89] found id: ""
	I1217 21:29:13.411029  666795 logs.go:282] 0 containers: []
	W1217 21:29:13.411038  666795 logs.go:284] No container was found matching "etcd"
	I1217 21:29:13.411044  666795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 21:29:13.411104  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 21:29:13.437641  666795 cri.go:89] found id: ""
	I1217 21:29:13.437664  666795 logs.go:282] 0 containers: []
	W1217 21:29:13.437673  666795 logs.go:284] No container was found matching "coredns"
	I1217 21:29:13.437679  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 21:29:13.437740  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 21:29:13.463824  666795 cri.go:89] found id: ""
	I1217 21:29:13.463847  666795 logs.go:282] 0 containers: []
	W1217 21:29:13.463916  666795 logs.go:284] No container was found matching "kube-scheduler"
	I1217 21:29:13.463932  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 21:29:13.464009  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 21:29:13.504323  666795 cri.go:89] found id: ""
	I1217 21:29:13.504352  666795 logs.go:282] 0 containers: []
	W1217 21:29:13.504362  666795 logs.go:284] No container was found matching "kube-proxy"
	I1217 21:29:13.504368  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 21:29:13.504428  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 21:29:13.536480  666795 cri.go:89] found id: ""
	I1217 21:29:13.536548  666795 logs.go:282] 0 containers: []
	W1217 21:29:13.536571  666795 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 21:29:13.536589  666795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 21:29:13.536664  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 21:29:13.564753  666795 cri.go:89] found id: ""
	I1217 21:29:13.564825  666795 logs.go:282] 0 containers: []
	W1217 21:29:13.564851  666795 logs.go:284] No container was found matching "kindnet"
	I1217 21:29:13.564870  666795 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 21:29:13.564954  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 21:29:13.590357  666795 cri.go:89] found id: ""
	I1217 21:29:13.590427  666795 logs.go:282] 0 containers: []
	W1217 21:29:13.590451  666795 logs.go:284] No container was found matching "storage-provisioner"
	I1217 21:29:13.590473  666795 logs.go:123] Gathering logs for kubelet ...
	I1217 21:29:13.590513  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 21:29:13.659336  666795 logs.go:123] Gathering logs for dmesg ...
	I1217 21:29:13.659375  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 21:29:13.676825  666795 logs.go:123] Gathering logs for describe nodes ...
	I1217 21:29:13.676858  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 21:29:13.740670  666795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 21:29:13.740689  666795 logs.go:123] Gathering logs for CRI-O ...
	I1217 21:29:13.740702  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 21:29:13.771807  666795 logs.go:123] Gathering logs for container status ...
	I1217 21:29:13.771844  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 21:29:16.302284  666795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 21:29:16.312248  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 21:29:16.312315  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 21:29:16.341652  666795 cri.go:89] found id: ""
	I1217 21:29:16.341677  666795 logs.go:282] 0 containers: []
	W1217 21:29:16.341686  666795 logs.go:284] No container was found matching "kube-apiserver"
	I1217 21:29:16.341692  666795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 21:29:16.341752  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 21:29:16.382001  666795 cri.go:89] found id: ""
	I1217 21:29:16.382025  666795 logs.go:282] 0 containers: []
	W1217 21:29:16.382034  666795 logs.go:284] No container was found matching "etcd"
	I1217 21:29:16.382040  666795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 21:29:16.382099  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 21:29:16.411665  666795 cri.go:89] found id: ""
	I1217 21:29:16.411688  666795 logs.go:282] 0 containers: []
	W1217 21:29:16.411696  666795 logs.go:284] No container was found matching "coredns"
	I1217 21:29:16.411702  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 21:29:16.411767  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 21:29:16.437987  666795 cri.go:89] found id: ""
	I1217 21:29:16.438010  666795 logs.go:282] 0 containers: []
	W1217 21:29:16.438018  666795 logs.go:284] No container was found matching "kube-scheduler"
	I1217 21:29:16.438025  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 21:29:16.438093  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 21:29:16.468623  666795 cri.go:89] found id: ""
	I1217 21:29:16.468645  666795 logs.go:282] 0 containers: []
	W1217 21:29:16.468654  666795 logs.go:284] No container was found matching "kube-proxy"
	I1217 21:29:16.468660  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 21:29:16.468718  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 21:29:16.493496  666795 cri.go:89] found id: ""
	I1217 21:29:16.493519  666795 logs.go:282] 0 containers: []
	W1217 21:29:16.493527  666795 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 21:29:16.493533  666795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 21:29:16.493593  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 21:29:16.523616  666795 cri.go:89] found id: ""
	I1217 21:29:16.523682  666795 logs.go:282] 0 containers: []
	W1217 21:29:16.523705  666795 logs.go:284] No container was found matching "kindnet"
	I1217 21:29:16.523723  666795 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 21:29:16.523800  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 21:29:16.550431  666795 cri.go:89] found id: ""
	I1217 21:29:16.550499  666795 logs.go:282] 0 containers: []
	W1217 21:29:16.550526  666795 logs.go:284] No container was found matching "storage-provisioner"
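Each cycle scans the same fixed component list (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, storage-provisioner) by running "sudo crictl ps -a --quiet --name=<component>" and treating empty output as zero containers, which is what the found id: "" lines above record. A minimal local sketch of that scan, assuming crictl is on PATH (minikube's cri.go runs the equivalent over SSH):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
	}
	for _, name := range components {
		// --quiet prints one container ID per line; empty output means no match,
		// which is what every scan in this log reports.
		out, _ := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if ids := strings.Fields(string(out)); len(ids) > 0 {
			fmt.Printf("%s: %v\n", name, ids)
		} else {
			fmt.Printf("no container found matching %q\n", name)
		}
	}
}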
	I1217 21:29:16.550547  666795 logs.go:123] Gathering logs for kubelet ...
	I1217 21:29:16.550574  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 21:29:16.618533  666795 logs.go:123] Gathering logs for dmesg ...
	I1217 21:29:16.618570  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 21:29:16.635041  666795 logs.go:123] Gathering logs for describe nodes ...
	I1217 21:29:16.635124  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 21:29:16.703312  666795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 21:29:16.703330  666795 logs.go:123] Gathering logs for CRI-O ...
	I1217 21:29:16.703342  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 21:29:16.734316  666795 logs.go:123] Gathering logs for container status ...
	I1217 21:29:16.734348  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
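The "container status" step uses a shell fallback chain: it resolves crictl via "which crictl || echo crictl", and if the crictl invocation fails it falls back to "sudo docker ps -a" so the step still yields a listing on Docker-runtime nodes. The same fallback sketched in Go (command names as in the log; the error handling is an assumption):

package main

import (
	"fmt"
	"os/exec"
)

// containerStatus prefers crictl and falls back to docker, mirroring
// the `sudo crictl ps -a || sudo docker ps -a` chain in the log above.
func containerStatus() ([]byte, error) {
	if out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput(); err == nil {
		return out, nil
	}
	return exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("neither crictl nor docker produced a listing:", err)
		return
	}
	fmt.Print(string(out))
}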
	I1217 21:29:19.264703  666795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 21:29:19.274766  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 21:29:19.274834  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 21:29:19.302503  666795 cri.go:89] found id: ""
	I1217 21:29:19.302529  666795 logs.go:282] 0 containers: []
	W1217 21:29:19.302539  666795 logs.go:284] No container was found matching "kube-apiserver"
	I1217 21:29:19.302545  666795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 21:29:19.302608  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 21:29:19.337255  666795 cri.go:89] found id: ""
	I1217 21:29:19.337280  666795 logs.go:282] 0 containers: []
	W1217 21:29:19.337289  666795 logs.go:284] No container was found matching "etcd"
	I1217 21:29:19.337300  666795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 21:29:19.337361  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 21:29:19.369199  666795 cri.go:89] found id: ""
	I1217 21:29:19.369221  666795 logs.go:282] 0 containers: []
	W1217 21:29:19.369230  666795 logs.go:284] No container was found matching "coredns"
	I1217 21:29:19.369236  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 21:29:19.369308  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 21:29:19.401409  666795 cri.go:89] found id: ""
	I1217 21:29:19.401432  666795 logs.go:282] 0 containers: []
	W1217 21:29:19.401441  666795 logs.go:284] No container was found matching "kube-scheduler"
	I1217 21:29:19.401447  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 21:29:19.401507  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 21:29:19.427431  666795 cri.go:89] found id: ""
	I1217 21:29:19.427454  666795 logs.go:282] 0 containers: []
	W1217 21:29:19.427463  666795 logs.go:284] No container was found matching "kube-proxy"
	I1217 21:29:19.427468  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 21:29:19.427526  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 21:29:19.454602  666795 cri.go:89] found id: ""
	I1217 21:29:19.454625  666795 logs.go:282] 0 containers: []
	W1217 21:29:19.454633  666795 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 21:29:19.454640  666795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 21:29:19.454700  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 21:29:19.483126  666795 cri.go:89] found id: ""
	I1217 21:29:19.483148  666795 logs.go:282] 0 containers: []
	W1217 21:29:19.483157  666795 logs.go:284] No container was found matching "kindnet"
	I1217 21:29:19.483163  666795 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 21:29:19.483231  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 21:29:19.511014  666795 cri.go:89] found id: ""
	I1217 21:29:19.511039  666795 logs.go:282] 0 containers: []
	W1217 21:29:19.511047  666795 logs.go:284] No container was found matching "storage-provisioner"
	I1217 21:29:19.511056  666795 logs.go:123] Gathering logs for kubelet ...
	I1217 21:29:19.511068  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 21:29:19.593706  666795 logs.go:123] Gathering logs for dmesg ...
	I1217 21:29:19.593741  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 21:29:19.618687  666795 logs.go:123] Gathering logs for describe nodes ...
	I1217 21:29:19.618718  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 21:29:19.707254  666795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 21:29:19.707277  666795 logs.go:123] Gathering logs for CRI-O ...
	I1217 21:29:19.707289  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 21:29:19.748744  666795 logs.go:123] Gathering logs for container status ...
	I1217 21:29:19.748854  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 21:29:22.302387  666795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 21:29:22.313196  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 21:29:22.313268  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 21:29:22.341422  666795 cri.go:89] found id: ""
	I1217 21:29:22.341447  666795 logs.go:282] 0 containers: []
	W1217 21:29:22.341461  666795 logs.go:284] No container was found matching "kube-apiserver"
	I1217 21:29:22.341467  666795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 21:29:22.341523  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 21:29:22.370252  666795 cri.go:89] found id: ""
	I1217 21:29:22.370276  666795 logs.go:282] 0 containers: []
	W1217 21:29:22.370284  666795 logs.go:284] No container was found matching "etcd"
	I1217 21:29:22.370291  666795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 21:29:22.370346  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 21:29:22.397840  666795 cri.go:89] found id: ""
	I1217 21:29:22.397862  666795 logs.go:282] 0 containers: []
	W1217 21:29:22.397870  666795 logs.go:284] No container was found matching "coredns"
	I1217 21:29:22.397876  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 21:29:22.397932  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 21:29:22.424243  666795 cri.go:89] found id: ""
	I1217 21:29:22.424267  666795 logs.go:282] 0 containers: []
	W1217 21:29:22.424276  666795 logs.go:284] No container was found matching "kube-scheduler"
	I1217 21:29:22.424282  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 21:29:22.424340  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 21:29:22.450330  666795 cri.go:89] found id: ""
	I1217 21:29:22.450353  666795 logs.go:282] 0 containers: []
	W1217 21:29:22.450361  666795 logs.go:284] No container was found matching "kube-proxy"
	I1217 21:29:22.450367  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 21:29:22.450424  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 21:29:22.476611  666795 cri.go:89] found id: ""
	I1217 21:29:22.476636  666795 logs.go:282] 0 containers: []
	W1217 21:29:22.476646  666795 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 21:29:22.476653  666795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 21:29:22.476712  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 21:29:22.509443  666795 cri.go:89] found id: ""
	I1217 21:29:22.509469  666795 logs.go:282] 0 containers: []
	W1217 21:29:22.509479  666795 logs.go:284] No container was found matching "kindnet"
	I1217 21:29:22.509486  666795 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 21:29:22.509578  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 21:29:22.536039  666795 cri.go:89] found id: ""
	I1217 21:29:22.536062  666795 logs.go:282] 0 containers: []
	W1217 21:29:22.536070  666795 logs.go:284] No container was found matching "storage-provisioner"
	I1217 21:29:22.536080  666795 logs.go:123] Gathering logs for kubelet ...
	I1217 21:29:22.536094  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 21:29:22.607753  666795 logs.go:123] Gathering logs for dmesg ...
	I1217 21:29:22.607791  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 21:29:22.624687  666795 logs.go:123] Gathering logs for describe nodes ...
	I1217 21:29:22.624717  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 21:29:22.691498  666795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 21:29:22.691520  666795 logs.go:123] Gathering logs for CRI-O ...
	I1217 21:29:22.691535  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 21:29:22.722490  666795 logs.go:123] Gathering logs for container status ...
	I1217 21:29:22.722527  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 21:29:25.251325  666795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 21:29:25.262684  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 21:29:25.262756  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 21:29:25.289873  666795 cri.go:89] found id: ""
	I1217 21:29:25.289895  666795 logs.go:282] 0 containers: []
	W1217 21:29:25.289908  666795 logs.go:284] No container was found matching "kube-apiserver"
	I1217 21:29:25.289915  666795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 21:29:25.289972  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 21:29:25.333793  666795 cri.go:89] found id: ""
	I1217 21:29:25.333814  666795 logs.go:282] 0 containers: []
	W1217 21:29:25.333823  666795 logs.go:284] No container was found matching "etcd"
	I1217 21:29:25.333830  666795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 21:29:25.333887  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 21:29:25.362032  666795 cri.go:89] found id: ""
	I1217 21:29:25.362054  666795 logs.go:282] 0 containers: []
	W1217 21:29:25.362063  666795 logs.go:284] No container was found matching "coredns"
	I1217 21:29:25.362068  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 21:29:25.362128  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 21:29:25.399025  666795 cri.go:89] found id: ""
	I1217 21:29:25.399086  666795 logs.go:282] 0 containers: []
	W1217 21:29:25.399109  666795 logs.go:284] No container was found matching "kube-scheduler"
	I1217 21:29:25.399131  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 21:29:25.399218  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 21:29:25.428132  666795 cri.go:89] found id: ""
	I1217 21:29:25.428154  666795 logs.go:282] 0 containers: []
	W1217 21:29:25.428162  666795 logs.go:284] No container was found matching "kube-proxy"
	I1217 21:29:25.428168  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 21:29:25.428225  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 21:29:25.454038  666795 cri.go:89] found id: ""
	I1217 21:29:25.454071  666795 logs.go:282] 0 containers: []
	W1217 21:29:25.454079  666795 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 21:29:25.454088  666795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 21:29:25.454143  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 21:29:25.479414  666795 cri.go:89] found id: ""
	I1217 21:29:25.479437  666795 logs.go:282] 0 containers: []
	W1217 21:29:25.479446  666795 logs.go:284] No container was found matching "kindnet"
	I1217 21:29:25.479452  666795 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 21:29:25.479515  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 21:29:25.505548  666795 cri.go:89] found id: ""
	I1217 21:29:25.505571  666795 logs.go:282] 0 containers: []
	W1217 21:29:25.505579  666795 logs.go:284] No container was found matching "storage-provisioner"
	I1217 21:29:25.505587  666795 logs.go:123] Gathering logs for dmesg ...
	I1217 21:29:25.505599  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 21:29:25.521730  666795 logs.go:123] Gathering logs for describe nodes ...
	I1217 21:29:25.521759  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 21:29:25.590942  666795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 21:29:25.590971  666795 logs.go:123] Gathering logs for CRI-O ...
	I1217 21:29:25.590984  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 21:29:25.637277  666795 logs.go:123] Gathering logs for container status ...
	I1217 21:29:25.637321  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 21:29:25.670826  666795 logs.go:123] Gathering logs for kubelet ...
	I1217 21:29:25.670853  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 21:29:28.244431  666795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 21:29:28.255568  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 21:29:28.255692  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 21:29:28.295932  666795 cri.go:89] found id: ""
	I1217 21:29:28.295956  666795 logs.go:282] 0 containers: []
	W1217 21:29:28.295964  666795 logs.go:284] No container was found matching "kube-apiserver"
	I1217 21:29:28.295971  666795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 21:29:28.296032  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 21:29:28.354036  666795 cri.go:89] found id: ""
	I1217 21:29:28.354059  666795 logs.go:282] 0 containers: []
	W1217 21:29:28.354068  666795 logs.go:284] No container was found matching "etcd"
	I1217 21:29:28.354074  666795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 21:29:28.354132  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 21:29:28.420915  666795 cri.go:89] found id: ""
	I1217 21:29:28.420937  666795 logs.go:282] 0 containers: []
	W1217 21:29:28.420946  666795 logs.go:284] No container was found matching "coredns"
	I1217 21:29:28.420953  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 21:29:28.421014  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 21:29:28.469828  666795 cri.go:89] found id: ""
	I1217 21:29:28.469850  666795 logs.go:282] 0 containers: []
	W1217 21:29:28.469859  666795 logs.go:284] No container was found matching "kube-scheduler"
	I1217 21:29:28.469865  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 21:29:28.469922  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 21:29:28.504384  666795 cri.go:89] found id: ""
	I1217 21:29:28.504405  666795 logs.go:282] 0 containers: []
	W1217 21:29:28.504414  666795 logs.go:284] No container was found matching "kube-proxy"
	I1217 21:29:28.504420  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 21:29:28.504478  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 21:29:28.542202  666795 cri.go:89] found id: ""
	I1217 21:29:28.542226  666795 logs.go:282] 0 containers: []
	W1217 21:29:28.542234  666795 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 21:29:28.542240  666795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 21:29:28.542295  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 21:29:28.586219  666795 cri.go:89] found id: ""
	I1217 21:29:28.586243  666795 logs.go:282] 0 containers: []
	W1217 21:29:28.586258  666795 logs.go:284] No container was found matching "kindnet"
	I1217 21:29:28.586265  666795 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 21:29:28.586324  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 21:29:28.618327  666795 cri.go:89] found id: ""
	I1217 21:29:28.618350  666795 logs.go:282] 0 containers: []
	W1217 21:29:28.618358  666795 logs.go:284] No container was found matching "storage-provisioner"
	I1217 21:29:28.618368  666795 logs.go:123] Gathering logs for container status ...
	I1217 21:29:28.618380  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 21:29:28.672785  666795 logs.go:123] Gathering logs for kubelet ...
	I1217 21:29:28.672813  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 21:29:28.746556  666795 logs.go:123] Gathering logs for dmesg ...
	I1217 21:29:28.746596  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 21:29:28.763074  666795 logs.go:123] Gathering logs for describe nodes ...
	I1217 21:29:28.763106  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 21:29:28.827564  666795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 21:29:28.827604  666795 logs.go:123] Gathering logs for CRI-O ...
	I1217 21:29:28.827617  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
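Note that the order of the "Gathering logs for ..." steps is not stable: the 21:29:16 through 21:29:22 cycles go kubelet, dmesg, describe nodes, CRI-O, container status, while the 21:29:25 cycle starts with dmesg and the 21:29:28 cycle with container status. That per-cycle shuffling is consistent with the log sources being held in a Go map, whose range order is deliberately randomized. A standalone demonstration (not minikube code):

package main

import "fmt"

func main() {
	// Same keys as the log-gathering steps above; values elided.
	steps := map[string]bool{
		"kubelet": true, "dmesg": true, "describe nodes": true,
		"CRI-O": true, "container status": true,
	}
	for i := 0; i < 3; i++ {
		for name := range steps { // order varies between iterations and runs
			fmt.Print(name, " | ")
		}
		fmt.Println()
	}
}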
	I1217 21:29:31.359699  666795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 21:29:31.388803  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 21:29:31.388875  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 21:29:31.427022  666795 cri.go:89] found id: ""
	I1217 21:29:31.427049  666795 logs.go:282] 0 containers: []
	W1217 21:29:31.427058  666795 logs.go:284] No container was found matching "kube-apiserver"
	I1217 21:29:31.427065  666795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 21:29:31.427123  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 21:29:31.460117  666795 cri.go:89] found id: ""
	I1217 21:29:31.460147  666795 logs.go:282] 0 containers: []
	W1217 21:29:31.460156  666795 logs.go:284] No container was found matching "etcd"
	I1217 21:29:31.460163  666795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 21:29:31.460225  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 21:29:31.490525  666795 cri.go:89] found id: ""
	I1217 21:29:31.490551  666795 logs.go:282] 0 containers: []
	W1217 21:29:31.490561  666795 logs.go:284] No container was found matching "coredns"
	I1217 21:29:31.490567  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 21:29:31.490630  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 21:29:31.531696  666795 cri.go:89] found id: ""
	I1217 21:29:31.531718  666795 logs.go:282] 0 containers: []
	W1217 21:29:31.531726  666795 logs.go:284] No container was found matching "kube-scheduler"
	I1217 21:29:31.531733  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 21:29:31.531790  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 21:29:31.582690  666795 cri.go:89] found id: ""
	I1217 21:29:31.582716  666795 logs.go:282] 0 containers: []
	W1217 21:29:31.582725  666795 logs.go:284] No container was found matching "kube-proxy"
	I1217 21:29:31.582731  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 21:29:31.582789  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 21:29:31.626933  666795 cri.go:89] found id: ""
	I1217 21:29:31.626955  666795 logs.go:282] 0 containers: []
	W1217 21:29:31.626965  666795 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 21:29:31.626971  666795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 21:29:31.627036  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 21:29:31.656872  666795 cri.go:89] found id: ""
	I1217 21:29:31.656894  666795 logs.go:282] 0 containers: []
	W1217 21:29:31.656910  666795 logs.go:284] No container was found matching "kindnet"
	I1217 21:29:31.656917  666795 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 21:29:31.656980  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 21:29:31.698876  666795 cri.go:89] found id: ""
	I1217 21:29:31.698897  666795 logs.go:282] 0 containers: []
	W1217 21:29:31.698906  666795 logs.go:284] No container was found matching "storage-provisioner"
	I1217 21:29:31.698915  666795 logs.go:123] Gathering logs for CRI-O ...
	I1217 21:29:31.698926  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 21:29:31.751507  666795 logs.go:123] Gathering logs for container status ...
	I1217 21:29:31.751616  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 21:29:31.790729  666795 logs.go:123] Gathering logs for kubelet ...
	I1217 21:29:31.790753  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 21:29:31.873567  666795 logs.go:123] Gathering logs for dmesg ...
	I1217 21:29:31.873647  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 21:29:31.892952  666795 logs.go:123] Gathering logs for describe nodes ...
	I1217 21:29:31.892978  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 21:29:31.975278  666795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
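Reading the timestamps, the whole probe repeats on a roughly three-second cadence: pgrep for a kube-apiserver process, the per-component crictl scan, then a full log-gathering pass. A minimal poll-until-deadline sketch of that outer loop (the 3s interval is read off the timestamps; the overall deadline is an assumption, since the real timeout does not appear in this excerpt):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(4 * time.Minute) // assumed budget, not from the log
	for time.Now().Before(deadline) {
		// Mirrors the `sudo pgrep -xnf kube-apiserver.*minikube.*` probe above;
		// pgrep exits non-zero when no process matches.
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			fmt.Println("kube-apiserver process found")
			return
		}
		time.Sleep(3 * time.Second)
	}
	fmt.Println("timed out waiting for kube-apiserver")
}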
	I1217 21:29:34.476180  666795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 21:29:34.486364  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 21:29:34.486434  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 21:29:34.516722  666795 cri.go:89] found id: ""
	I1217 21:29:34.516747  666795 logs.go:282] 0 containers: []
	W1217 21:29:34.516756  666795 logs.go:284] No container was found matching "kube-apiserver"
	I1217 21:29:34.516762  666795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 21:29:34.516820  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 21:29:34.546514  666795 cri.go:89] found id: ""
	I1217 21:29:34.546539  666795 logs.go:282] 0 containers: []
	W1217 21:29:34.546547  666795 logs.go:284] No container was found matching "etcd"
	I1217 21:29:34.546553  666795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 21:29:34.546613  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 21:29:34.574174  666795 cri.go:89] found id: ""
	I1217 21:29:34.574197  666795 logs.go:282] 0 containers: []
	W1217 21:29:34.574212  666795 logs.go:284] No container was found matching "coredns"
	I1217 21:29:34.574219  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 21:29:34.574280  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 21:29:34.600848  666795 cri.go:89] found id: ""
	I1217 21:29:34.600871  666795 logs.go:282] 0 containers: []
	W1217 21:29:34.600880  666795 logs.go:284] No container was found matching "kube-scheduler"
	I1217 21:29:34.600886  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 21:29:34.600946  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 21:29:34.628499  666795 cri.go:89] found id: ""
	I1217 21:29:34.628575  666795 logs.go:282] 0 containers: []
	W1217 21:29:34.628600  666795 logs.go:284] No container was found matching "kube-proxy"
	I1217 21:29:34.628620  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 21:29:34.628713  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 21:29:34.654606  666795 cri.go:89] found id: ""
	I1217 21:29:34.654632  666795 logs.go:282] 0 containers: []
	W1217 21:29:34.654642  666795 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 21:29:34.654648  666795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 21:29:34.654706  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 21:29:34.682815  666795 cri.go:89] found id: ""
	I1217 21:29:34.682840  666795 logs.go:282] 0 containers: []
	W1217 21:29:34.682849  666795 logs.go:284] No container was found matching "kindnet"
	I1217 21:29:34.682855  666795 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 21:29:34.682913  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 21:29:34.717856  666795 cri.go:89] found id: ""
	I1217 21:29:34.717881  666795 logs.go:282] 0 containers: []
	W1217 21:29:34.717890  666795 logs.go:284] No container was found matching "storage-provisioner"
	I1217 21:29:34.717898  666795 logs.go:123] Gathering logs for describe nodes ...
	I1217 21:29:34.717910  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 21:29:34.810049  666795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 21:29:34.810146  666795 logs.go:123] Gathering logs for CRI-O ...
	I1217 21:29:34.810211  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 21:29:34.854218  666795 logs.go:123] Gathering logs for container status ...
	I1217 21:29:34.854328  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 21:29:34.900260  666795 logs.go:123] Gathering logs for kubelet ...
	I1217 21:29:34.900330  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 21:29:34.976221  666795 logs.go:123] Gathering logs for dmesg ...
	I1217 21:29:34.976301  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 21:29:37.494200  666795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 21:29:37.504576  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 21:29:37.504653  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 21:29:37.531308  666795 cri.go:89] found id: ""
	I1217 21:29:37.531331  666795 logs.go:282] 0 containers: []
	W1217 21:29:37.531340  666795 logs.go:284] No container was found matching "kube-apiserver"
	I1217 21:29:37.531346  666795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 21:29:37.531410  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 21:29:37.556653  666795 cri.go:89] found id: ""
	I1217 21:29:37.556674  666795 logs.go:282] 0 containers: []
	W1217 21:29:37.556683  666795 logs.go:284] No container was found matching "etcd"
	I1217 21:29:37.556689  666795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 21:29:37.556745  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 21:29:37.582221  666795 cri.go:89] found id: ""
	I1217 21:29:37.582245  666795 logs.go:282] 0 containers: []
	W1217 21:29:37.582254  666795 logs.go:284] No container was found matching "coredns"
	I1217 21:29:37.582260  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 21:29:37.582322  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 21:29:37.607532  666795 cri.go:89] found id: ""
	I1217 21:29:37.607555  666795 logs.go:282] 0 containers: []
	W1217 21:29:37.607564  666795 logs.go:284] No container was found matching "kube-scheduler"
	I1217 21:29:37.607570  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 21:29:37.607652  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 21:29:37.634187  666795 cri.go:89] found id: ""
	I1217 21:29:37.634211  666795 logs.go:282] 0 containers: []
	W1217 21:29:37.634221  666795 logs.go:284] No container was found matching "kube-proxy"
	I1217 21:29:37.634228  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 21:29:37.634290  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 21:29:37.660176  666795 cri.go:89] found id: ""
	I1217 21:29:37.660199  666795 logs.go:282] 0 containers: []
	W1217 21:29:37.660208  666795 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 21:29:37.660214  666795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 21:29:37.660318  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 21:29:37.688304  666795 cri.go:89] found id: ""
	I1217 21:29:37.688330  666795 logs.go:282] 0 containers: []
	W1217 21:29:37.688338  666795 logs.go:284] No container was found matching "kindnet"
	I1217 21:29:37.688345  666795 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 21:29:37.688416  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 21:29:37.713297  666795 cri.go:89] found id: ""
	I1217 21:29:37.713318  666795 logs.go:282] 0 containers: []
	W1217 21:29:37.713327  666795 logs.go:284] No container was found matching "storage-provisioner"
	I1217 21:29:37.713336  666795 logs.go:123] Gathering logs for CRI-O ...
	I1217 21:29:37.713348  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 21:29:37.745296  666795 logs.go:123] Gathering logs for container status ...
	I1217 21:29:37.745331  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 21:29:37.775715  666795 logs.go:123] Gathering logs for kubelet ...
	I1217 21:29:37.775740  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 21:29:37.847192  666795 logs.go:123] Gathering logs for dmesg ...
	I1217 21:29:37.847237  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 21:29:37.864886  666795 logs.go:123] Gathering logs for describe nodes ...
	I1217 21:29:37.864920  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 21:29:37.934784  666795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
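The "describe nodes" step pins kubectl to the node-local kubeconfig at /var/lib/minikube/kubeconfig. The same health probe can also be made programmatically; a minimal client-go sketch (assumes k8s.io/client-go is available; this is not how minikube implements the step):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		// With the apiserver down this fails the same way as the log:
		// connection refused on localhost:8443.
		fmt.Println("node listing failed:", err)
		return
	}
	for _, n := range nodes.Items {
		fmt.Println(n.Name)
	}
}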
	I1217 21:29:40.435797  666795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 21:29:40.446879  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 21:29:40.446952  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 21:29:40.475201  666795 cri.go:89] found id: ""
	I1217 21:29:40.475226  666795 logs.go:282] 0 containers: []
	W1217 21:29:40.475234  666795 logs.go:284] No container was found matching "kube-apiserver"
	I1217 21:29:40.475241  666795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 21:29:40.475299  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 21:29:40.501700  666795 cri.go:89] found id: ""
	I1217 21:29:40.501723  666795 logs.go:282] 0 containers: []
	W1217 21:29:40.501731  666795 logs.go:284] No container was found matching "etcd"
	I1217 21:29:40.501738  666795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 21:29:40.501807  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 21:29:40.530541  666795 cri.go:89] found id: ""
	I1217 21:29:40.530574  666795 logs.go:282] 0 containers: []
	W1217 21:29:40.530584  666795 logs.go:284] No container was found matching "coredns"
	I1217 21:29:40.530590  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 21:29:40.530650  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 21:29:40.558135  666795 cri.go:89] found id: ""
	I1217 21:29:40.558159  666795 logs.go:282] 0 containers: []
	W1217 21:29:40.558168  666795 logs.go:284] No container was found matching "kube-scheduler"
	I1217 21:29:40.558174  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 21:29:40.558230  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 21:29:40.585794  666795 cri.go:89] found id: ""
	I1217 21:29:40.585817  666795 logs.go:282] 0 containers: []
	W1217 21:29:40.585826  666795 logs.go:284] No container was found matching "kube-proxy"
	I1217 21:29:40.585832  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 21:29:40.585893  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 21:29:40.616255  666795 cri.go:89] found id: ""
	I1217 21:29:40.616278  666795 logs.go:282] 0 containers: []
	W1217 21:29:40.616286  666795 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 21:29:40.616293  666795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 21:29:40.616351  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 21:29:40.644521  666795 cri.go:89] found id: ""
	I1217 21:29:40.644544  666795 logs.go:282] 0 containers: []
	W1217 21:29:40.644552  666795 logs.go:284] No container was found matching "kindnet"
	I1217 21:29:40.644558  666795 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 21:29:40.644619  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 21:29:40.670043  666795 cri.go:89] found id: ""
	I1217 21:29:40.670064  666795 logs.go:282] 0 containers: []
	W1217 21:29:40.670073  666795 logs.go:284] No container was found matching "storage-provisioner"
	I1217 21:29:40.670082  666795 logs.go:123] Gathering logs for kubelet ...
	I1217 21:29:40.670094  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 21:29:40.738678  666795 logs.go:123] Gathering logs for dmesg ...
	I1217 21:29:40.738712  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 21:29:40.754897  666795 logs.go:123] Gathering logs for describe nodes ...
	I1217 21:29:40.754928  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 21:29:40.821604  666795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 21:29:40.821625  666795 logs.go:123] Gathering logs for CRI-O ...
	I1217 21:29:40.821638  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 21:29:40.855062  666795 logs.go:123] Gathering logs for container status ...
	I1217 21:29:40.855104  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 21:29:43.387803  666795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 21:29:43.397831  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 21:29:43.397899  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 21:29:43.427745  666795 cri.go:89] found id: ""
	I1217 21:29:43.427769  666795 logs.go:282] 0 containers: []
	W1217 21:29:43.427778  666795 logs.go:284] No container was found matching "kube-apiserver"
	I1217 21:29:43.427785  666795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 21:29:43.427845  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 21:29:43.457985  666795 cri.go:89] found id: ""
	I1217 21:29:43.458009  666795 logs.go:282] 0 containers: []
	W1217 21:29:43.458019  666795 logs.go:284] No container was found matching "etcd"
	I1217 21:29:43.458026  666795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 21:29:43.458087  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 21:29:43.485130  666795 cri.go:89] found id: ""
	I1217 21:29:43.485157  666795 logs.go:282] 0 containers: []
	W1217 21:29:43.485166  666795 logs.go:284] No container was found matching "coredns"
	I1217 21:29:43.485173  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 21:29:43.485233  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 21:29:43.515451  666795 cri.go:89] found id: ""
	I1217 21:29:43.515477  666795 logs.go:282] 0 containers: []
	W1217 21:29:43.515487  666795 logs.go:284] No container was found matching "kube-scheduler"
	I1217 21:29:43.515493  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 21:29:43.515554  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 21:29:43.544542  666795 cri.go:89] found id: ""
	I1217 21:29:43.544567  666795 logs.go:282] 0 containers: []
	W1217 21:29:43.544577  666795 logs.go:284] No container was found matching "kube-proxy"
	I1217 21:29:43.544584  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 21:29:43.544649  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 21:29:43.571460  666795 cri.go:89] found id: ""
	I1217 21:29:43.571482  666795 logs.go:282] 0 containers: []
	W1217 21:29:43.571491  666795 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 21:29:43.571497  666795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 21:29:43.571563  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 21:29:43.598061  666795 cri.go:89] found id: ""
	I1217 21:29:43.598087  666795 logs.go:282] 0 containers: []
	W1217 21:29:43.598102  666795 logs.go:284] No container was found matching "kindnet"
	I1217 21:29:43.598109  666795 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 21:29:43.598166  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 21:29:43.624575  666795 cri.go:89] found id: ""
	I1217 21:29:43.624598  666795 logs.go:282] 0 containers: []
	W1217 21:29:43.624607  666795 logs.go:284] No container was found matching "storage-provisioner"
	I1217 21:29:43.624615  666795 logs.go:123] Gathering logs for CRI-O ...
	I1217 21:29:43.624627  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 21:29:43.657382  666795 logs.go:123] Gathering logs for container status ...
	I1217 21:29:43.657418  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 21:29:43.689363  666795 logs.go:123] Gathering logs for kubelet ...
	I1217 21:29:43.689393  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 21:29:43.757518  666795 logs.go:123] Gathering logs for dmesg ...
	I1217 21:29:43.757555  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 21:29:43.775561  666795 logs.go:123] Gathering logs for describe nodes ...
	I1217 21:29:43.775648  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 21:29:43.840753  666795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 21:29:46.342193  666795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 21:29:46.352178  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 21:29:46.352300  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 21:29:46.388689  666795 cri.go:89] found id: ""
	I1217 21:29:46.388717  666795 logs.go:282] 0 containers: []
	W1217 21:29:46.388725  666795 logs.go:284] No container was found matching "kube-apiserver"
	I1217 21:29:46.388733  666795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 21:29:46.388791  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 21:29:46.415133  666795 cri.go:89] found id: ""
	I1217 21:29:46.415154  666795 logs.go:282] 0 containers: []
	W1217 21:29:46.415163  666795 logs.go:284] No container was found matching "etcd"
	I1217 21:29:46.415169  666795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 21:29:46.415229  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 21:29:46.441266  666795 cri.go:89] found id: ""
	I1217 21:29:46.441293  666795 logs.go:282] 0 containers: []
	W1217 21:29:46.441302  666795 logs.go:284] No container was found matching "coredns"
	I1217 21:29:46.441308  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 21:29:46.441367  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 21:29:46.466224  666795 cri.go:89] found id: ""
	I1217 21:29:46.466244  666795 logs.go:282] 0 containers: []
	W1217 21:29:46.466252  666795 logs.go:284] No container was found matching "kube-scheduler"
	I1217 21:29:46.466259  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 21:29:46.466317  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 21:29:46.490606  666795 cri.go:89] found id: ""
	I1217 21:29:46.490629  666795 logs.go:282] 0 containers: []
	W1217 21:29:46.490637  666795 logs.go:284] No container was found matching "kube-proxy"
	I1217 21:29:46.490643  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 21:29:46.490699  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 21:29:46.517492  666795 cri.go:89] found id: ""
	I1217 21:29:46.517514  666795 logs.go:282] 0 containers: []
	W1217 21:29:46.517522  666795 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 21:29:46.517528  666795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 21:29:46.517584  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 21:29:46.543849  666795 cri.go:89] found id: ""
	I1217 21:29:46.543871  666795 logs.go:282] 0 containers: []
	W1217 21:29:46.543879  666795 logs.go:284] No container was found matching "kindnet"
	I1217 21:29:46.543885  666795 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 21:29:46.543941  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 21:29:46.569451  666795 cri.go:89] found id: ""
	I1217 21:29:46.569473  666795 logs.go:282] 0 containers: []
	W1217 21:29:46.569482  666795 logs.go:284] No container was found matching "storage-provisioner"
	I1217 21:29:46.569491  666795 logs.go:123] Gathering logs for kubelet ...
	I1217 21:29:46.569502  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 21:29:46.636613  666795 logs.go:123] Gathering logs for dmesg ...
	I1217 21:29:46.636647  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 21:29:46.653791  666795 logs.go:123] Gathering logs for describe nodes ...
	I1217 21:29:46.653822  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 21:29:46.725375  666795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 21:29:46.725397  666795 logs.go:123] Gathering logs for CRI-O ...
	I1217 21:29:46.725410  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 21:29:46.756829  666795 logs.go:123] Gathering logs for container status ...
	I1217 21:29:46.756865  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 21:29:49.286202  666795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 21:29:49.296466  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 21:29:49.296552  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 21:29:49.332732  666795 cri.go:89] found id: ""
	I1217 21:29:49.332759  666795 logs.go:282] 0 containers: []
	W1217 21:29:49.332768  666795 logs.go:284] No container was found matching "kube-apiserver"
	I1217 21:29:49.332774  666795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 21:29:49.332839  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 21:29:49.373868  666795 cri.go:89] found id: ""
	I1217 21:29:49.373896  666795 logs.go:282] 0 containers: []
	W1217 21:29:49.373905  666795 logs.go:284] No container was found matching "etcd"
	I1217 21:29:49.373911  666795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 21:29:49.373978  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 21:29:49.402425  666795 cri.go:89] found id: ""
	I1217 21:29:49.402454  666795 logs.go:282] 0 containers: []
	W1217 21:29:49.402463  666795 logs.go:284] No container was found matching "coredns"
	I1217 21:29:49.402468  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 21:29:49.402527  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 21:29:49.428292  666795 cri.go:89] found id: ""
	I1217 21:29:49.428316  666795 logs.go:282] 0 containers: []
	W1217 21:29:49.428325  666795 logs.go:284] No container was found matching "kube-scheduler"
	I1217 21:29:49.428332  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 21:29:49.428391  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 21:29:49.455169  666795 cri.go:89] found id: ""
	I1217 21:29:49.455249  666795 logs.go:282] 0 containers: []
	W1217 21:29:49.455273  666795 logs.go:284] No container was found matching "kube-proxy"
	I1217 21:29:49.455296  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 21:29:49.455378  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 21:29:49.483172  666795 cri.go:89] found id: ""
	I1217 21:29:49.483194  666795 logs.go:282] 0 containers: []
	W1217 21:29:49.483204  666795 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 21:29:49.483210  666795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 21:29:49.483270  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 21:29:49.513696  666795 cri.go:89] found id: ""
	I1217 21:29:49.513720  666795 logs.go:282] 0 containers: []
	W1217 21:29:49.513729  666795 logs.go:284] No container was found matching "kindnet"
	I1217 21:29:49.513736  666795 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 21:29:49.513815  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 21:29:49.539758  666795 cri.go:89] found id: ""
	I1217 21:29:49.539781  666795 logs.go:282] 0 containers: []
	W1217 21:29:49.539790  666795 logs.go:284] No container was found matching "storage-provisioner"
	I1217 21:29:49.539798  666795 logs.go:123] Gathering logs for kubelet ...
	I1217 21:29:49.539811  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 21:29:49.607632  666795 logs.go:123] Gathering logs for dmesg ...
	I1217 21:29:49.607669  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 21:29:49.624327  666795 logs.go:123] Gathering logs for describe nodes ...
	I1217 21:29:49.624358  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 21:29:49.690264  666795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 21:29:49.690283  666795 logs.go:123] Gathering logs for CRI-O ...
	I1217 21:29:49.690297  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 21:29:49.722199  666795 logs.go:123] Gathering logs for container status ...
	I1217 21:29:49.722229  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
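Every describe-nodes attempt in these iterations fails identically: nothing is listening on the apiserver's secure port, so kubectl gets connection refused on localhost:8443. A direct probe of the port from inside the node confirms the same thing without going through kubectl (a sketch; -k skips TLS verification, and on a healthy apiserver /healthz typically returns "ok"):

	curl -ks https://localhost:8443/healthz || echo 'apiserver not reachable'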
	I1217 21:29:52.252921  666795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 21:29:52.263626  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 21:29:52.263695  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 21:29:52.307838  666795 cri.go:89] found id: ""
	I1217 21:29:52.307862  666795 logs.go:282] 0 containers: []
	W1217 21:29:52.307871  666795 logs.go:284] No container was found matching "kube-apiserver"
	I1217 21:29:52.307886  666795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 21:29:52.307952  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 21:29:52.372966  666795 cri.go:89] found id: ""
	I1217 21:29:52.373008  666795 logs.go:282] 0 containers: []
	W1217 21:29:52.373018  666795 logs.go:284] No container was found matching "etcd"
	I1217 21:29:52.373024  666795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 21:29:52.373096  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 21:29:52.427679  666795 cri.go:89] found id: ""
	I1217 21:29:52.427712  666795 logs.go:282] 0 containers: []
	W1217 21:29:52.427721  666795 logs.go:284] No container was found matching "coredns"
	I1217 21:29:52.427730  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 21:29:52.427791  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 21:29:52.468691  666795 cri.go:89] found id: ""
	I1217 21:29:52.468715  666795 logs.go:282] 0 containers: []
	W1217 21:29:52.468724  666795 logs.go:284] No container was found matching "kube-scheduler"
	I1217 21:29:52.468730  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 21:29:52.468794  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 21:29:52.500640  666795 cri.go:89] found id: ""
	I1217 21:29:52.500663  666795 logs.go:282] 0 containers: []
	W1217 21:29:52.500672  666795 logs.go:284] No container was found matching "kube-proxy"
	I1217 21:29:52.500678  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 21:29:52.500736  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 21:29:52.527376  666795 cri.go:89] found id: ""
	I1217 21:29:52.527404  666795 logs.go:282] 0 containers: []
	W1217 21:29:52.527413  666795 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 21:29:52.527419  666795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 21:29:52.527481  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 21:29:52.552163  666795 cri.go:89] found id: ""
	I1217 21:29:52.552186  666795 logs.go:282] 0 containers: []
	W1217 21:29:52.552195  666795 logs.go:284] No container was found matching "kindnet"
	I1217 21:29:52.552202  666795 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 21:29:52.552267  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 21:29:52.578808  666795 cri.go:89] found id: ""
	I1217 21:29:52.578833  666795 logs.go:282] 0 containers: []
	W1217 21:29:52.578843  666795 logs.go:284] No container was found matching "storage-provisioner"
	I1217 21:29:52.578851  666795 logs.go:123] Gathering logs for kubelet ...
	I1217 21:29:52.578862  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 21:29:52.647204  666795 logs.go:123] Gathering logs for dmesg ...
	I1217 21:29:52.647243  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 21:29:52.664852  666795 logs.go:123] Gathering logs for describe nodes ...
	I1217 21:29:52.664880  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 21:29:52.732863  666795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 21:29:52.732884  666795 logs.go:123] Gathering logs for CRI-O ...
	I1217 21:29:52.732900  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 21:29:52.763840  666795 logs.go:123] Gathering logs for container status ...
	I1217 21:29:52.763875  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 21:29:55.296823  666795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 21:29:55.308144  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 21:29:55.308221  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 21:29:55.346238  666795 cri.go:89] found id: ""
	I1217 21:29:55.346267  666795 logs.go:282] 0 containers: []
	W1217 21:29:55.346276  666795 logs.go:284] No container was found matching "kube-apiserver"
	I1217 21:29:55.346282  666795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 21:29:55.346339  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 21:29:55.411917  666795 cri.go:89] found id: ""
	I1217 21:29:55.411945  666795 logs.go:282] 0 containers: []
	W1217 21:29:55.411954  666795 logs.go:284] No container was found matching "etcd"
	I1217 21:29:55.411960  666795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 21:29:55.412017  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 21:29:55.477006  666795 cri.go:89] found id: ""
	I1217 21:29:55.477034  666795 logs.go:282] 0 containers: []
	W1217 21:29:55.477043  666795 logs.go:284] No container was found matching "coredns"
	I1217 21:29:55.477050  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 21:29:55.477109  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 21:29:55.516890  666795 cri.go:89] found id: ""
	I1217 21:29:55.516926  666795 logs.go:282] 0 containers: []
	W1217 21:29:55.516936  666795 logs.go:284] No container was found matching "kube-scheduler"
	I1217 21:29:55.516943  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 21:29:55.517013  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 21:29:55.554053  666795 cri.go:89] found id: ""
	I1217 21:29:55.554085  666795 logs.go:282] 0 containers: []
	W1217 21:29:55.554095  666795 logs.go:284] No container was found matching "kube-proxy"
	I1217 21:29:55.554101  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 21:29:55.554175  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 21:29:55.600566  666795 cri.go:89] found id: ""
	I1217 21:29:55.600600  666795 logs.go:282] 0 containers: []
	W1217 21:29:55.600609  666795 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 21:29:55.600616  666795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 21:29:55.600674  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 21:29:55.635743  666795 cri.go:89] found id: ""
	I1217 21:29:55.635772  666795 logs.go:282] 0 containers: []
	W1217 21:29:55.635780  666795 logs.go:284] No container was found matching "kindnet"
	I1217 21:29:55.635786  666795 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 21:29:55.635853  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 21:29:55.670018  666795 cri.go:89] found id: ""
	I1217 21:29:55.670047  666795 logs.go:282] 0 containers: []
	W1217 21:29:55.670056  666795 logs.go:284] No container was found matching "storage-provisioner"
	I1217 21:29:55.670065  666795 logs.go:123] Gathering logs for kubelet ...
	I1217 21:29:55.670077  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 21:29:55.748499  666795 logs.go:123] Gathering logs for dmesg ...
	I1217 21:29:55.748533  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 21:29:55.768075  666795 logs.go:123] Gathering logs for describe nodes ...
	I1217 21:29:55.768107  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 21:29:55.863571  666795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 21:29:55.863610  666795 logs.go:123] Gathering logs for CRI-O ...
	I1217 21:29:55.863624  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 21:29:55.899274  666795 logs.go:123] Gathering logs for container status ...
	I1217 21:29:55.899303  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 21:29:58.455984  666795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 21:29:58.466201  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 21:29:58.466259  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 21:29:58.502696  666795 cri.go:89] found id: ""
	I1217 21:29:58.502718  666795 logs.go:282] 0 containers: []
	W1217 21:29:58.502727  666795 logs.go:284] No container was found matching "kube-apiserver"
	I1217 21:29:58.502735  666795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 21:29:58.502792  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 21:29:58.540072  666795 cri.go:89] found id: ""
	I1217 21:29:58.540095  666795 logs.go:282] 0 containers: []
	W1217 21:29:58.540104  666795 logs.go:284] No container was found matching "etcd"
	I1217 21:29:58.540111  666795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 21:29:58.540175  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 21:29:58.584556  666795 cri.go:89] found id: ""
	I1217 21:29:58.584578  666795 logs.go:282] 0 containers: []
	W1217 21:29:58.584587  666795 logs.go:284] No container was found matching "coredns"
	I1217 21:29:58.584593  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 21:29:58.584650  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 21:29:58.649629  666795 cri.go:89] found id: ""
	I1217 21:29:58.649651  666795 logs.go:282] 0 containers: []
	W1217 21:29:58.649660  666795 logs.go:284] No container was found matching "kube-scheduler"
	I1217 21:29:58.649666  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 21:29:58.649725  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 21:29:58.687322  666795 cri.go:89] found id: ""
	I1217 21:29:58.687344  666795 logs.go:282] 0 containers: []
	W1217 21:29:58.687352  666795 logs.go:284] No container was found matching "kube-proxy"
	I1217 21:29:58.687358  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 21:29:58.687415  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 21:29:58.718687  666795 cri.go:89] found id: ""
	I1217 21:29:58.718708  666795 logs.go:282] 0 containers: []
	W1217 21:29:58.718717  666795 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 21:29:58.718723  666795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 21:29:58.718779  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 21:29:58.760299  666795 cri.go:89] found id: ""
	I1217 21:29:58.760322  666795 logs.go:282] 0 containers: []
	W1217 21:29:58.760330  666795 logs.go:284] No container was found matching "kindnet"
	I1217 21:29:58.760337  666795 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 21:29:58.760397  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 21:29:58.797919  666795 cri.go:89] found id: ""
	I1217 21:29:58.797942  666795 logs.go:282] 0 containers: []
	W1217 21:29:58.797956  666795 logs.go:284] No container was found matching "storage-provisioner"
	I1217 21:29:58.797965  666795 logs.go:123] Gathering logs for kubelet ...
	I1217 21:29:58.797977  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 21:29:58.878207  666795 logs.go:123] Gathering logs for dmesg ...
	I1217 21:29:58.878361  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 21:29:58.896522  666795 logs.go:123] Gathering logs for describe nodes ...
	I1217 21:29:58.896548  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 21:29:58.998498  666795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 21:29:58.998574  666795 logs.go:123] Gathering logs for CRI-O ...
	I1217 21:29:58.998612  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 21:29:59.037090  666795 logs.go:123] Gathering logs for container status ...
	I1217 21:29:59.037171  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 21:30:01.568316  666795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 21:30:01.578897  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 21:30:01.578979  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 21:30:01.607656  666795 cri.go:89] found id: ""
	I1217 21:30:01.607685  666795 logs.go:282] 0 containers: []
	W1217 21:30:01.607695  666795 logs.go:284] No container was found matching "kube-apiserver"
	I1217 21:30:01.607702  666795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 21:30:01.607768  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 21:30:01.637213  666795 cri.go:89] found id: ""
	I1217 21:30:01.637245  666795 logs.go:282] 0 containers: []
	W1217 21:30:01.637275  666795 logs.go:284] No container was found matching "etcd"
	I1217 21:30:01.637284  666795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 21:30:01.637380  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 21:30:01.667174  666795 cri.go:89] found id: ""
	I1217 21:30:01.667203  666795 logs.go:282] 0 containers: []
	W1217 21:30:01.667213  666795 logs.go:284] No container was found matching "coredns"
	I1217 21:30:01.667219  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 21:30:01.667281  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 21:30:01.696028  666795 cri.go:89] found id: ""
	I1217 21:30:01.696055  666795 logs.go:282] 0 containers: []
	W1217 21:30:01.696066  666795 logs.go:284] No container was found matching "kube-scheduler"
	I1217 21:30:01.696074  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 21:30:01.696154  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 21:30:01.727883  666795 cri.go:89] found id: ""
	I1217 21:30:01.727934  666795 logs.go:282] 0 containers: []
	W1217 21:30:01.727950  666795 logs.go:284] No container was found matching "kube-proxy"
	I1217 21:30:01.727957  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 21:30:01.728046  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 21:30:01.768875  666795 cri.go:89] found id: ""
	I1217 21:30:01.768900  666795 logs.go:282] 0 containers: []
	W1217 21:30:01.768910  666795 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 21:30:01.768917  666795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 21:30:01.768981  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 21:30:01.795710  666795 cri.go:89] found id: ""
	I1217 21:30:01.795738  666795 logs.go:282] 0 containers: []
	W1217 21:30:01.795746  666795 logs.go:284] No container was found matching "kindnet"
	I1217 21:30:01.795753  666795 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 21:30:01.795814  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 21:30:01.831970  666795 cri.go:89] found id: ""
	I1217 21:30:01.831994  666795 logs.go:282] 0 containers: []
	W1217 21:30:01.832002  666795 logs.go:284] No container was found matching "storage-provisioner"
	I1217 21:30:01.832012  666795 logs.go:123] Gathering logs for kubelet ...
	I1217 21:30:01.832024  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 21:30:01.909406  666795 logs.go:123] Gathering logs for dmesg ...
	I1217 21:30:01.909482  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 21:30:01.937062  666795 logs.go:123] Gathering logs for describe nodes ...
	I1217 21:30:01.937097  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 21:30:02.035761  666795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 21:30:02.035782  666795 logs.go:123] Gathering logs for CRI-O ...
	I1217 21:30:02.035795  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 21:30:02.073767  666795 logs.go:123] Gathering logs for container status ...
	I1217 21:30:02.073850  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 21:30:04.624665  666795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 21:30:04.634542  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 21:30:04.634614  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 21:30:04.660468  666795 cri.go:89] found id: ""
	I1217 21:30:04.660490  666795 logs.go:282] 0 containers: []
	W1217 21:30:04.660503  666795 logs.go:284] No container was found matching "kube-apiserver"
	I1217 21:30:04.660509  666795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 21:30:04.660570  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 21:30:04.686438  666795 cri.go:89] found id: ""
	I1217 21:30:04.686464  666795 logs.go:282] 0 containers: []
	W1217 21:30:04.686473  666795 logs.go:284] No container was found matching "etcd"
	I1217 21:30:04.686479  666795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 21:30:04.686569  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 21:30:04.714545  666795 cri.go:89] found id: ""
	I1217 21:30:04.714567  666795 logs.go:282] 0 containers: []
	W1217 21:30:04.714576  666795 logs.go:284] No container was found matching "coredns"
	I1217 21:30:04.714582  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 21:30:04.714644  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 21:30:04.747316  666795 cri.go:89] found id: ""
	I1217 21:30:04.747344  666795 logs.go:282] 0 containers: []
	W1217 21:30:04.747354  666795 logs.go:284] No container was found matching "kube-scheduler"
	I1217 21:30:04.747360  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 21:30:04.747419  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 21:30:04.774245  666795 cri.go:89] found id: ""
	I1217 21:30:04.774271  666795 logs.go:282] 0 containers: []
	W1217 21:30:04.774280  666795 logs.go:284] No container was found matching "kube-proxy"
	I1217 21:30:04.774287  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 21:30:04.774346  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 21:30:04.799748  666795 cri.go:89] found id: ""
	I1217 21:30:04.799771  666795 logs.go:282] 0 containers: []
	W1217 21:30:04.799779  666795 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 21:30:04.799786  666795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 21:30:04.799847  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 21:30:04.829927  666795 cri.go:89] found id: ""
	I1217 21:30:04.829947  666795 logs.go:282] 0 containers: []
	W1217 21:30:04.829956  666795 logs.go:284] No container was found matching "kindnet"
	I1217 21:30:04.829962  666795 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 21:30:04.830023  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 21:30:04.860680  666795 cri.go:89] found id: ""
	I1217 21:30:04.860702  666795 logs.go:282] 0 containers: []
	W1217 21:30:04.860710  666795 logs.go:284] No container was found matching "storage-provisioner"
	I1217 21:30:04.860719  666795 logs.go:123] Gathering logs for dmesg ...
	I1217 21:30:04.860731  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 21:30:04.878628  666795 logs.go:123] Gathering logs for describe nodes ...
	I1217 21:30:04.878710  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 21:30:04.943250  666795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 21:30:04.943272  666795 logs.go:123] Gathering logs for CRI-O ...
	I1217 21:30:04.943291  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 21:30:04.974141  666795 logs.go:123] Gathering logs for container status ...
	I1217 21:30:04.974173  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 21:30:05.008983  666795 logs.go:123] Gathering logs for kubelet ...
	I1217 21:30:05.009014  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 21:30:07.579900  666795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 21:30:07.591189  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 21:30:07.591267  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 21:30:07.621436  666795 cri.go:89] found id: ""
	I1217 21:30:07.621459  666795 logs.go:282] 0 containers: []
	W1217 21:30:07.621468  666795 logs.go:284] No container was found matching "kube-apiserver"
	I1217 21:30:07.621474  666795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 21:30:07.621532  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 21:30:07.652004  666795 cri.go:89] found id: ""
	I1217 21:30:07.652026  666795 logs.go:282] 0 containers: []
	W1217 21:30:07.652035  666795 logs.go:284] No container was found matching "etcd"
	I1217 21:30:07.652041  666795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 21:30:07.652112  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 21:30:07.684131  666795 cri.go:89] found id: ""
	I1217 21:30:07.684153  666795 logs.go:282] 0 containers: []
	W1217 21:30:07.684162  666795 logs.go:284] No container was found matching "coredns"
	I1217 21:30:07.684167  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 21:30:07.684239  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 21:30:07.719401  666795 cri.go:89] found id: ""
	I1217 21:30:07.719491  666795 logs.go:282] 0 containers: []
	W1217 21:30:07.719516  666795 logs.go:284] No container was found matching "kube-scheduler"
	I1217 21:30:07.719541  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 21:30:07.719685  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 21:30:07.757426  666795 cri.go:89] found id: ""
	I1217 21:30:07.757448  666795 logs.go:282] 0 containers: []
	W1217 21:30:07.757456  666795 logs.go:284] No container was found matching "kube-proxy"
	I1217 21:30:07.757463  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 21:30:07.757519  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 21:30:07.793614  666795 cri.go:89] found id: ""
	I1217 21:30:07.793792  666795 logs.go:282] 0 containers: []
	W1217 21:30:07.793808  666795 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 21:30:07.793815  666795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 21:30:07.793888  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 21:30:07.848066  666795 cri.go:89] found id: ""
	I1217 21:30:07.848090  666795 logs.go:282] 0 containers: []
	W1217 21:30:07.848100  666795 logs.go:284] No container was found matching "kindnet"
	I1217 21:30:07.848106  666795 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 21:30:07.848165  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 21:30:07.892505  666795 cri.go:89] found id: ""
	I1217 21:30:07.892529  666795 logs.go:282] 0 containers: []
	W1217 21:30:07.892539  666795 logs.go:284] No container was found matching "storage-provisioner"
	I1217 21:30:07.892548  666795 logs.go:123] Gathering logs for kubelet ...
	I1217 21:30:07.892560  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 21:30:07.962976  666795 logs.go:123] Gathering logs for dmesg ...
	I1217 21:30:07.963018  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 21:30:07.980274  666795 logs.go:123] Gathering logs for describe nodes ...
	I1217 21:30:07.980302  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 21:30:08.059110  666795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 21:30:08.059179  666795 logs.go:123] Gathering logs for CRI-O ...
	I1217 21:30:08.059207  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 21:30:08.090674  666795 logs.go:123] Gathering logs for container status ...
	I1217 21:30:08.090708  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 21:30:10.621303  666795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 21:30:10.631680  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 21:30:10.631753  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 21:30:10.666201  666795 cri.go:89] found id: ""
	I1217 21:30:10.666225  666795 logs.go:282] 0 containers: []
	W1217 21:30:10.666233  666795 logs.go:284] No container was found matching "kube-apiserver"
	I1217 21:30:10.666241  666795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 21:30:10.666301  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 21:30:10.697080  666795 cri.go:89] found id: ""
	I1217 21:30:10.697164  666795 logs.go:282] 0 containers: []
	W1217 21:30:10.697183  666795 logs.go:284] No container was found matching "etcd"
	I1217 21:30:10.697190  666795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 21:30:10.697252  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 21:30:10.727233  666795 cri.go:89] found id: ""
	I1217 21:30:10.727262  666795 logs.go:282] 0 containers: []
	W1217 21:30:10.727271  666795 logs.go:284] No container was found matching "coredns"
	I1217 21:30:10.727277  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 21:30:10.727334  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 21:30:10.753135  666795 cri.go:89] found id: ""
	I1217 21:30:10.753203  666795 logs.go:282] 0 containers: []
	W1217 21:30:10.753217  666795 logs.go:284] No container was found matching "kube-scheduler"
	I1217 21:30:10.753224  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 21:30:10.753286  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 21:30:10.782596  666795 cri.go:89] found id: ""
	I1217 21:30:10.782619  666795 logs.go:282] 0 containers: []
	W1217 21:30:10.782628  666795 logs.go:284] No container was found matching "kube-proxy"
	I1217 21:30:10.782634  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 21:30:10.782703  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 21:30:10.808929  666795 cri.go:89] found id: ""
	I1217 21:30:10.808954  666795 logs.go:282] 0 containers: []
	W1217 21:30:10.808964  666795 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 21:30:10.808971  666795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 21:30:10.809028  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 21:30:10.847926  666795 cri.go:89] found id: ""
	I1217 21:30:10.847955  666795 logs.go:282] 0 containers: []
	W1217 21:30:10.847966  666795 logs.go:284] No container was found matching "kindnet"
	I1217 21:30:10.847973  666795 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 21:30:10.848050  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 21:30:10.887638  666795 cri.go:89] found id: ""
	I1217 21:30:10.887665  666795 logs.go:282] 0 containers: []
	W1217 21:30:10.887675  666795 logs.go:284] No container was found matching "storage-provisioner"
	I1217 21:30:10.887684  666795 logs.go:123] Gathering logs for container status ...
	I1217 21:30:10.887696  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 21:30:10.923325  666795 logs.go:123] Gathering logs for kubelet ...
	I1217 21:30:10.923354  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 21:30:10.992033  666795 logs.go:123] Gathering logs for dmesg ...
	I1217 21:30:10.992071  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 21:30:11.010172  666795 logs.go:123] Gathering logs for describe nodes ...
	I1217 21:30:11.010256  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 21:30:11.073690  666795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 21:30:11.073712  666795 logs.go:123] Gathering logs for CRI-O ...
	I1217 21:30:11.073726  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 21:30:13.607696  666795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 21:30:13.617954  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 21:30:13.618031  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 21:30:13.647457  666795 cri.go:89] found id: ""
	I1217 21:30:13.647481  666795 logs.go:282] 0 containers: []
	W1217 21:30:13.647489  666795 logs.go:284] No container was found matching "kube-apiserver"
	I1217 21:30:13.647495  666795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 21:30:13.647555  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 21:30:13.676581  666795 cri.go:89] found id: ""
	I1217 21:30:13.676605  666795 logs.go:282] 0 containers: []
	W1217 21:30:13.676614  666795 logs.go:284] No container was found matching "etcd"
	I1217 21:30:13.676620  666795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 21:30:13.676680  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 21:30:13.709681  666795 cri.go:89] found id: ""
	I1217 21:30:13.709706  666795 logs.go:282] 0 containers: []
	W1217 21:30:13.709714  666795 logs.go:284] No container was found matching "coredns"
	I1217 21:30:13.709720  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 21:30:13.709779  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 21:30:13.742771  666795 cri.go:89] found id: ""
	I1217 21:30:13.742791  666795 logs.go:282] 0 containers: []
	W1217 21:30:13.742799  666795 logs.go:284] No container was found matching "kube-scheduler"
	I1217 21:30:13.742805  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 21:30:13.742858  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 21:30:13.786075  666795 cri.go:89] found id: ""
	I1217 21:30:13.786098  666795 logs.go:282] 0 containers: []
	W1217 21:30:13.786106  666795 logs.go:284] No container was found matching "kube-proxy"
	I1217 21:30:13.786112  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 21:30:13.786188  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 21:30:13.842972  666795 cri.go:89] found id: ""
	I1217 21:30:13.843030  666795 logs.go:282] 0 containers: []
	W1217 21:30:13.843039  666795 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 21:30:13.843047  666795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 21:30:13.843112  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 21:30:13.902763  666795 cri.go:89] found id: ""
	I1217 21:30:13.902787  666795 logs.go:282] 0 containers: []
	W1217 21:30:13.902796  666795 logs.go:284] No container was found matching "kindnet"
	I1217 21:30:13.902802  666795 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 21:30:13.902875  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 21:30:13.956564  666795 cri.go:89] found id: ""
	I1217 21:30:13.956587  666795 logs.go:282] 0 containers: []
	W1217 21:30:13.956596  666795 logs.go:284] No container was found matching "storage-provisioner"
	I1217 21:30:13.956605  666795 logs.go:123] Gathering logs for kubelet ...
	I1217 21:30:13.956617  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 21:30:14.035304  666795 logs.go:123] Gathering logs for dmesg ...
	I1217 21:30:14.035380  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 21:30:14.058877  666795 logs.go:123] Gathering logs for describe nodes ...
	I1217 21:30:14.058902  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 21:30:14.144673  666795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 21:30:14.144746  666795 logs.go:123] Gathering logs for CRI-O ...
	I1217 21:30:14.144783  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 21:30:14.181458  666795 logs.go:123] Gathering logs for container status ...
	I1217 21:30:14.181529  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 21:30:16.716269  666795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 21:30:16.726469  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 21:30:16.726545  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 21:30:16.752081  666795 cri.go:89] found id: ""
	I1217 21:30:16.752109  666795 logs.go:282] 0 containers: []
	W1217 21:30:16.752119  666795 logs.go:284] No container was found matching "kube-apiserver"
	I1217 21:30:16.752126  666795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 21:30:16.752184  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 21:30:16.778877  666795 cri.go:89] found id: ""
	I1217 21:30:16.778902  666795 logs.go:282] 0 containers: []
	W1217 21:30:16.778911  666795 logs.go:284] No container was found matching "etcd"
	I1217 21:30:16.778917  666795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 21:30:16.778987  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 21:30:16.805306  666795 cri.go:89] found id: ""
	I1217 21:30:16.805330  666795 logs.go:282] 0 containers: []
	W1217 21:30:16.805340  666795 logs.go:284] No container was found matching "coredns"
	I1217 21:30:16.805346  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 21:30:16.805403  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 21:30:16.844821  666795 cri.go:89] found id: ""
	I1217 21:30:16.844850  666795 logs.go:282] 0 containers: []
	W1217 21:30:16.844859  666795 logs.go:284] No container was found matching "kube-scheduler"
	I1217 21:30:16.844865  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 21:30:16.844930  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 21:30:16.881595  666795 cri.go:89] found id: ""
	I1217 21:30:16.881624  666795 logs.go:282] 0 containers: []
	W1217 21:30:16.881634  666795 logs.go:284] No container was found matching "kube-proxy"
	I1217 21:30:16.881640  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 21:30:16.881713  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 21:30:16.912398  666795 cri.go:89] found id: ""
	I1217 21:30:16.912428  666795 logs.go:282] 0 containers: []
	W1217 21:30:16.912438  666795 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 21:30:16.912445  666795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 21:30:16.912505  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 21:30:16.942229  666795 cri.go:89] found id: ""
	I1217 21:30:16.942256  666795 logs.go:282] 0 containers: []
	W1217 21:30:16.942265  666795 logs.go:284] No container was found matching "kindnet"
	I1217 21:30:16.942271  666795 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 21:30:16.942329  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 21:30:16.969473  666795 cri.go:89] found id: ""
	I1217 21:30:16.969496  666795 logs.go:282] 0 containers: []
	W1217 21:30:16.969505  666795 logs.go:284] No container was found matching "storage-provisioner"
	I1217 21:30:16.969514  666795 logs.go:123] Gathering logs for describe nodes ...
	I1217 21:30:16.969527  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 21:30:17.050366  666795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 21:30:17.050386  666795 logs.go:123] Gathering logs for CRI-O ...
	I1217 21:30:17.050399  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 21:30:17.080772  666795 logs.go:123] Gathering logs for container status ...
	I1217 21:30:17.080809  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 21:30:17.110838  666795 logs.go:123] Gathering logs for kubelet ...
	I1217 21:30:17.110871  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 21:30:17.178393  666795 logs.go:123] Gathering logs for dmesg ...
	I1217 21:30:17.178434  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 21:30:19.698879  666795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 21:30:19.708991  666795 kubeadm.go:602] duration metric: took 4m5.509830115s to restartPrimaryControlPlane
	W1217 21:30:19.709058  666795 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1217 21:30:19.709122  666795 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1217 21:30:20.195284  666795 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 21:30:20.208769  666795 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 21:30:20.219405  666795 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 21:30:20.219473  666795 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 21:30:20.228421  666795 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 21:30:20.228448  666795 kubeadm.go:158] found existing configuration files:
	
	I1217 21:30:20.228502  666795 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 21:30:20.240021  666795 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 21:30:20.240086  666795 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 21:30:20.250134  666795 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 21:30:20.263951  666795 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 21:30:20.264020  666795 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 21:30:20.280752  666795 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 21:30:20.289367  666795 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 21:30:20.289438  666795 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 21:30:20.297402  666795 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 21:30:20.306300  666795 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 21:30:20.306381  666795 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
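The four grep-and-remove pairs above are minikube's stale-kubeconfig cleanup: any of the four /etc/kubernetes/*.conf files that does not reference https://control-plane.minikube.internal:8443 is removed so kubeadm can regenerate it (here all four greps exit 2 because the files are already gone after the reset). Condensed into a single loop, with the same paths and endpoint, this is equivalent to the following sketch (not minikube's own code):

	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/${f}.conf" \
	    || sudo rm -f "/etc/kubernetes/${f}.conf"
	done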
	I1217 21:30:20.314439  666795 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 21:30:20.372714  666795 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1217 21:30:20.372778  666795 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 21:30:20.470955  666795 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 21:30:20.471049  666795 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1217 21:30:20.471101  666795 kubeadm.go:319] OS: Linux
	I1217 21:30:20.471153  666795 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 21:30:20.471206  666795 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 21:30:20.471256  666795 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 21:30:20.471308  666795 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 21:30:20.471359  666795 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 21:30:20.471417  666795 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 21:30:20.471466  666795 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 21:30:20.471518  666795 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 21:30:20.471568  666795 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 21:30:20.554380  666795 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 21:30:20.554496  666795 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 21:30:20.554590  666795 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
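As the preflight message notes, the image pull can be performed ahead of time. Using the pinned kubeadm binary and generated config from this run, the equivalent manual invocation would be (a sketch assembled from paths in this log, not a command the test itself ran):

	sudo env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" \
	  kubeadm config images pull --config /var/tmp/minikube/kubeadm.yaml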
	I1217 21:30:20.579023  666795 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 21:30:20.584717  666795 out.go:252]   - Generating certificates and keys ...
	I1217 21:30:20.584822  666795 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 21:30:20.584939  666795 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 21:30:20.585025  666795 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1217 21:30:20.585087  666795 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1217 21:30:20.585157  666795 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1217 21:30:20.585975  666795 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1217 21:30:20.587403  666795 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1217 21:30:20.588606  666795 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1217 21:30:20.589577  666795 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1217 21:30:20.590453  666795 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1217 21:30:20.591319  666795 kubeadm.go:319] [certs] Using the existing "sa" key
	I1217 21:30:20.591443  666795 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 21:30:20.816259  666795 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 21:30:21.164108  666795 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 21:30:21.321681  666795 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 21:30:21.649108  666795 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 21:30:22.085313  666795 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 21:30:22.086067  666795 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 21:30:22.089728  666795 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 21:30:22.093218  666795 out.go:252]   - Booting up control plane ...
	I1217 21:30:22.093327  666795 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 21:30:22.093419  666795 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 21:30:22.095318  666795 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 21:30:22.112754  666795 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 21:30:22.113470  666795 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 21:30:22.124554  666795 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 21:30:22.124952  666795 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 21:30:22.125186  666795 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 21:30:22.260028  666795 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 21:30:22.260158  666795 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 21:34:22.260231  666795 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000316582s
	I1217 21:34:22.260265  666795 kubeadm.go:319] 
	I1217 21:34:22.260324  666795 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 21:34:22.260363  666795 kubeadm.go:319] 	- The kubelet is not running
	I1217 21:34:22.260472  666795 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 21:34:22.260481  666795 kubeadm.go:319] 
	I1217 21:34:22.260585  666795 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 21:34:22.260622  666795 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 21:34:22.260657  666795 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 21:34:22.260665  666795 kubeadm.go:319] 
	I1217 21:34:22.268777  666795 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1217 21:34:22.269237  666795 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 21:34:22.269359  666795 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 21:34:22.269616  666795 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1217 21:34:22.269625  666795 kubeadm.go:319] 
	I1217 21:34:22.269698  666795 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1217 21:34:22.269818  666795 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000316582s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1217 21:34:22.269899  666795 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1217 21:34:22.699275  666795 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 21:34:22.712542  666795 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 21:34:22.712603  666795 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 21:34:22.720615  666795 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 21:34:22.720640  666795 kubeadm.go:158] found existing configuration files:
	
	I1217 21:34:22.720717  666795 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 21:34:22.728473  666795 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 21:34:22.728539  666795 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 21:34:22.736046  666795 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 21:34:22.743704  666795 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 21:34:22.743769  666795 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 21:34:22.751064  666795 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 21:34:22.758844  666795 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 21:34:22.758941  666795 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 21:34:22.767309  666795 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 21:34:22.775542  666795 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 21:34:22.775647  666795 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 21:34:22.783482  666795 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 21:34:22.826831  666795 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1217 21:34:22.826933  666795 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 21:34:22.900009  666795 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 21:34:22.900114  666795 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1217 21:34:22.900164  666795 kubeadm.go:319] OS: Linux
	I1217 21:34:22.900215  666795 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 21:34:22.900270  666795 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 21:34:22.900322  666795 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 21:34:22.900373  666795 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 21:34:22.900424  666795 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 21:34:22.900476  666795 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 21:34:22.900524  666795 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 21:34:22.900575  666795 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 21:34:22.900624  666795 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 21:34:22.967839  666795 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 21:34:22.967950  666795 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 21:34:22.968041  666795 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 21:34:22.980069  666795 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 21:34:22.985458  666795 out.go:252]   - Generating certificates and keys ...
	I1217 21:34:22.985621  666795 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 21:34:22.985737  666795 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 21:34:22.985873  666795 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1217 21:34:22.985977  666795 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1217 21:34:22.986105  666795 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1217 21:34:22.986216  666795 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1217 21:34:22.986295  666795 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1217 21:34:22.986359  666795 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1217 21:34:22.986436  666795 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1217 21:34:22.986509  666795 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1217 21:34:22.986548  666795 kubeadm.go:319] [certs] Using the existing "sa" key
	I1217 21:34:22.986604  666795 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 21:34:23.332823  666795 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 21:34:23.654897  666795 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 21:34:23.737802  666795 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 21:34:24.005370  666795 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 21:34:24.253061  666795 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 21:34:24.253842  666795 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 21:34:24.256507  666795 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 21:34:24.259814  666795 out.go:252]   - Booting up control plane ...
	I1217 21:34:24.259985  666795 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 21:34:24.260110  666795 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 21:34:24.260218  666795 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 21:34:24.277772  666795 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 21:34:24.277889  666795 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 21:34:24.286207  666795 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 21:34:24.286579  666795 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 21:34:24.286797  666795 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 21:34:24.441867  666795 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 21:34:24.442126  666795 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 21:38:24.443465  666795 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001439744s
	I1217 21:38:24.449936  666795 kubeadm.go:319] 
	I1217 21:38:24.450031  666795 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 21:38:24.450070  666795 kubeadm.go:319] 	- The kubelet is not running
	I1217 21:38:24.450202  666795 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 21:38:24.450208  666795 kubeadm.go:319] 
	I1217 21:38:24.450357  666795 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 21:38:24.450409  666795 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 21:38:24.450443  666795 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 21:38:24.450447  666795 kubeadm.go:319] 
	I1217 21:38:24.459286  666795 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1217 21:38:24.460014  666795 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 21:38:24.460341  666795 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 21:38:24.460648  666795 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1217 21:38:24.460655  666795 kubeadm.go:319] 
	I1217 21:38:24.460818  666795 kubeadm.go:403] duration metric: took 12m10.301025728s to StartCluster
	I1217 21:38:24.460989  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 21:38:24.461070  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 21:38:24.461154  666795 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1217 21:38:24.493756  666795 cri.go:89] found id: ""
	I1217 21:38:24.493798  666795 logs.go:282] 0 containers: []
	W1217 21:38:24.493808  666795 logs.go:284] No container was found matching "kube-apiserver"
	I1217 21:38:24.493814  666795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 21:38:24.493886  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 21:38:24.521493  666795 cri.go:89] found id: ""
	I1217 21:38:24.521524  666795 logs.go:282] 0 containers: []
	W1217 21:38:24.521543  666795 logs.go:284] No container was found matching "etcd"
	I1217 21:38:24.521549  666795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 21:38:24.521635  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 21:38:24.554181  666795 cri.go:89] found id: ""
	I1217 21:38:24.554208  666795 logs.go:282] 0 containers: []
	W1217 21:38:24.554217  666795 logs.go:284] No container was found matching "coredns"
	I1217 21:38:24.554228  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 21:38:24.554286  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 21:38:24.605864  666795 cri.go:89] found id: ""
	I1217 21:38:24.605890  666795 logs.go:282] 0 containers: []
	W1217 21:38:24.605899  666795 logs.go:284] No container was found matching "kube-scheduler"
	I1217 21:38:24.605905  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 21:38:24.605966  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 21:38:24.684707  666795 cri.go:89] found id: ""
	I1217 21:38:24.684734  666795 logs.go:282] 0 containers: []
	W1217 21:38:24.684743  666795 logs.go:284] No container was found matching "kube-proxy"
	I1217 21:38:24.684749  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 21:38:24.684810  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 21:38:24.736800  666795 cri.go:89] found id: ""
	I1217 21:38:24.736880  666795 logs.go:282] 0 containers: []
	W1217 21:38:24.736903  666795 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 21:38:24.736922  666795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 21:38:24.737008  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 21:38:24.768479  666795 cri.go:89] found id: ""
	I1217 21:38:24.768582  666795 logs.go:282] 0 containers: []
	W1217 21:38:24.768611  666795 logs.go:284] No container was found matching "kindnet"
	I1217 21:38:24.768656  666795 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 21:38:24.768790  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 21:38:24.805973  666795 cri.go:89] found id: ""
	I1217 21:38:24.806054  666795 logs.go:282] 0 containers: []
	W1217 21:38:24.806077  666795 logs.go:284] No container was found matching "storage-provisioner"
	I1217 21:38:24.806106  666795 logs.go:123] Gathering logs for dmesg ...
	I1217 21:38:24.806161  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 21:38:24.824207  666795 logs.go:123] Gathering logs for describe nodes ...
	I1217 21:38:24.824293  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 21:38:24.911297  666795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 21:38:24.911374  666795 logs.go:123] Gathering logs for CRI-O ...
	I1217 21:38:24.911412  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 21:38:24.949613  666795 logs.go:123] Gathering logs for container status ...
	I1217 21:38:24.949710  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 21:38:24.990285  666795 logs.go:123] Gathering logs for kubelet ...
	I1217 21:38:24.990366  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1217 21:38:25.077352  666795 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001439744s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1217 21:38:25.077470  666795 out.go:285] * 
	W1217 21:38:25.077694  666795 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001439744s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1217 21:38:25.077757  666795 out.go:285] * 
	W1217 21:38:25.080228  666795 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 21:38:25.090819  666795 out.go:203] 
	W1217 21:38:25.094741  666795 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001439744s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1217 21:38:25.094814  666795 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1217 21:38:25.094836  666795 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1217 21:38:25.098897  666795 out.go:203] 
** /stderr **
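Editor's note: the failure above is the kubelet never answering on 127.0.0.1:10248 after [kubelet-start], with stderr warning that cgroups v1 support now requires explicit opt-in. A plausible (unverified) retry, reusing this run's exact profile and versions and adding only the override that minikube's own suggestion names:

	# Sketch following the log's own suggestion; whether it fixes the hang on a
	# cgroups-v1 host is an assumption, not a verified result. The stderr warning
	# additionally points at the KubeletConfiguration field 'FailCgroupV1'
	# (its exact YAML spelling is not shown in the log, so it is omitted here).
	out/minikube-linux-arm64 start -p kubernetes-upgrade-342357 --memory=3072 \
	  --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 \
	  --driver=docker --container-runtime=crio \
	  --extra-config=kubelet.cgroup-driver=systemd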
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-linux-arm64 start -p kubernetes-upgrade-342357 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio : exit status 109
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-342357 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-342357 version --output=json: exit status 1 (132.261909ms)
-- stdout --
	{
	  "clientVersion": {
	    "major": "1",
	    "minor": "33",
	    "gitVersion": "v1.33.2",
	    "gitCommit": "a57b6f7709f6c2722b92f07b8b4c48210a51fc40",
	    "gitTreeState": "clean",
	    "buildDate": "2025-06-17T18:41:31Z",
	    "goVersion": "go1.24.4",
	    "compiler": "gc",
	    "platform": "linux/arm64"
	  },
	  "kustomizeVersion": "v5.6.0"
	}
-- /stdout --
** stderr ** 
	The connection to the server 192.168.85.2:8443 was refused - did you specify the right host or port?
** /stderr **
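Editor's note: the client half of 'kubectl version' succeeded (the clientVersion JSON above), so this is a dead endpoint rather than a broken kubeconfig. A quick manual probe (hypothetical, not part of the test run; assumes curl is available on the host and inside the node image):

	# apiserver endpoint the kubeconfig points at:
	curl -k https://192.168.85.2:8443/healthz
	# the kubelet health endpoint kubeadm was polling, from inside the node:
	docker exec kubernetes-upgrade-342357 curl -s http://127.0.0.1:10248/healthz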
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:615: *** TestKubernetesUpgrade FAILED at 2025-12-17 21:38:25.736877134 +0000 UTC m=+5221.371984360
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestKubernetesUpgrade]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect kubernetes-upgrade-342357
helpers_test.go:244: (dbg) docker inspect kubernetes-upgrade-342357:
-- stdout --
	[
	    {
	        "Id": "f8d61c96e6842575c6628c1c936be1f165b166003f3c809b0fdffd22427bc265",
	        "Created": "2025-12-17T21:25:31.407725175Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 666926,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T21:26:01.709498233Z",
	            "FinishedAt": "2025-12-17T21:26:00.769927125Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/f8d61c96e6842575c6628c1c936be1f165b166003f3c809b0fdffd22427bc265/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f8d61c96e6842575c6628c1c936be1f165b166003f3c809b0fdffd22427bc265/hostname",
	        "HostsPath": "/var/lib/docker/containers/f8d61c96e6842575c6628c1c936be1f165b166003f3c809b0fdffd22427bc265/hosts",
	        "LogPath": "/var/lib/docker/containers/f8d61c96e6842575c6628c1c936be1f165b166003f3c809b0fdffd22427bc265/f8d61c96e6842575c6628c1c936be1f165b166003f3c809b0fdffd22427bc265-json.log",
	        "Name": "/kubernetes-upgrade-342357",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-342357:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-342357",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f8d61c96e6842575c6628c1c936be1f165b166003f3c809b0fdffd22427bc265",
	                "LowerDir": "/var/lib/docker/overlay2/92c2481877c23134a57b7f555a1553dee5ee5230ba71e3a7558756879ec16f2e-init/diff:/var/lib/docker/overlay2/c4759b344ecb109c83d66e4ef56c76903f1d1e597efb86ab2d2c7911b1130a8f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/92c2481877c23134a57b7f555a1553dee5ee5230ba71e3a7558756879ec16f2e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/92c2481877c23134a57b7f555a1553dee5ee5230ba71e3a7558756879ec16f2e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/92c2481877c23134a57b7f555a1553dee5ee5230ba71e3a7558756879ec16f2e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-342357",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-342357/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-342357",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-342357",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-342357",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c7f98d69e41eaca854b7316a744c4232f9139c5aa24307b05db0d77a651cc259",
	            "SandboxKey": "/var/run/docker/netns/c7f98d69e41e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33393"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33394"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33397"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33395"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33396"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-342357": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:68:ef:ea:cc:b9",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "18c396e2ac7b0fe7a6b149daa66d9867cd1b494e6d29246d773c3066b770c53d",
	                    "EndpointID": "b3c619c2f2885fcdce1e38a7c81fccc5b73aefe40be57caa95425082e7fc0864",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "kubernetes-upgrade-342357",
	                        "f8d61c96e684"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
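Editor's note: of the inspect dump above, the fields that matter for this failure are State.Status and the host port mapped to 8443. They can be read directly with a Go template instead of scanning the full JSON; this mirrors the template style minikube itself uses later in these logs:

	docker inspect --format '{{.State.Status}} {{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' kubernetes-upgrade-342357
	# for the state captured above this would print: running 33396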
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p kubernetes-upgrade-342357 -n kubernetes-upgrade-342357
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p kubernetes-upgrade-342357 -n kubernetes-upgrade-342357: exit status 2 (502.717294ms)
-- stdout --
	Running
-- /stdout --
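Editor's note: 'Running' here describes only the host container, because the test filtered with --format={{.Host}}; the non-zero exit code signals that other components are down. An unfiltered status call would show the per-component split; plausibly, given the failure above (expected output, not captured in this run):

	out/minikube-linux-arm64 status -p kubernetes-upgrade-342357
	# host: Running / kubelet: Stopped / apiserver: Stopped / kubeconfig: Configured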
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-342357 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p kubernetes-upgrade-342357 logs -n 25: (1.15444241s)
helpers_test.go:261: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                     ARGS                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-185508 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                         │ NoKubernetes-185508       │ jenkins │ v1.37.0 │ 17 Dec 25 21:24 UTC │ 17 Dec 25 21:26 UTC │
	│ start   │ -p missing-upgrade-783783 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                      │ missing-upgrade-783783    │ jenkins │ v1.37.0 │ 17 Dec 25 21:24 UTC │ 17 Dec 25 21:25 UTC │
	│ delete  │ -p missing-upgrade-783783                                                                                                                     │ missing-upgrade-783783    │ jenkins │ v1.37.0 │ 17 Dec 25 21:25 UTC │ 17 Dec 25 21:25 UTC │
	│ start   │ -p kubernetes-upgrade-342357 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio      │ kubernetes-upgrade-342357 │ jenkins │ v1.37.0 │ 17 Dec 25 21:25 UTC │ 17 Dec 25 21:25 UTC │
	│ stop    │ -p kubernetes-upgrade-342357                                                                                                                  │ kubernetes-upgrade-342357 │ jenkins │ v1.37.0 │ 17 Dec 25 21:25 UTC │ 17 Dec 25 21:26 UTC │
	│ start   │ -p kubernetes-upgrade-342357 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-342357 │ jenkins │ v1.37.0 │ 17 Dec 25 21:26 UTC │                     │
	│ delete  │ -p NoKubernetes-185508                                                                                                                        │ NoKubernetes-185508       │ jenkins │ v1.37.0 │ 17 Dec 25 21:26 UTC │ 17 Dec 25 21:26 UTC │
	│ start   │ -p NoKubernetes-185508 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                         │ NoKubernetes-185508       │ jenkins │ v1.37.0 │ 17 Dec 25 21:26 UTC │ 17 Dec 25 21:26 UTC │
	│ ssh     │ -p NoKubernetes-185508 sudo systemctl is-active --quiet service kubelet                                                                       │ NoKubernetes-185508       │ jenkins │ v1.37.0 │ 17 Dec 25 21:26 UTC │                     │
	│ stop    │ -p NoKubernetes-185508                                                                                                                        │ NoKubernetes-185508       │ jenkins │ v1.37.0 │ 17 Dec 25 21:26 UTC │ 17 Dec 25 21:26 UTC │
	│ start   │ -p NoKubernetes-185508 --driver=docker  --container-runtime=crio                                                                              │ NoKubernetes-185508       │ jenkins │ v1.37.0 │ 17 Dec 25 21:26 UTC │ 17 Dec 25 21:26 UTC │
	│ ssh     │ -p NoKubernetes-185508 sudo systemctl is-active --quiet service kubelet                                                                       │ NoKubernetes-185508       │ jenkins │ v1.37.0 │ 17 Dec 25 21:26 UTC │                     │
	│ delete  │ -p NoKubernetes-185508                                                                                                                        │ NoKubernetes-185508       │ jenkins │ v1.37.0 │ 17 Dec 25 21:26 UTC │ 17 Dec 25 21:26 UTC │
	│ start   │ -p stopped-upgrade-993252 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                          │ stopped-upgrade-993252    │ jenkins │ v1.35.0 │ 17 Dec 25 21:26 UTC │ 17 Dec 25 21:27 UTC │
	│ stop    │ stopped-upgrade-993252 stop                                                                                                                   │ stopped-upgrade-993252    │ jenkins │ v1.35.0 │ 17 Dec 25 21:27 UTC │ 17 Dec 25 21:27 UTC │
	│ start   │ -p stopped-upgrade-993252 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                      │ stopped-upgrade-993252    │ jenkins │ v1.37.0 │ 17 Dec 25 21:27 UTC │ 17 Dec 25 21:31 UTC │
	│ delete  │ -p stopped-upgrade-993252                                                                                                                     │ stopped-upgrade-993252    │ jenkins │ v1.37.0 │ 17 Dec 25 21:31 UTC │ 17 Dec 25 21:31 UTC │
	│ start   │ -p running-upgrade-206976 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                          │ running-upgrade-206976    │ jenkins │ v1.35.0 │ 17 Dec 25 21:31 UTC │ 17 Dec 25 21:32 UTC │
	│ start   │ -p running-upgrade-206976 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                      │ running-upgrade-206976    │ jenkins │ v1.37.0 │ 17 Dec 25 21:32 UTC │ 17 Dec 25 21:36 UTC │
	│ delete  │ -p running-upgrade-206976                                                                                                                     │ running-upgrade-206976    │ jenkins │ v1.37.0 │ 17 Dec 25 21:36 UTC │ 17 Dec 25 21:36 UTC │
	│ start   │ -p pause-918446 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                     │ pause-918446              │ jenkins │ v1.37.0 │ 17 Dec 25 21:36 UTC │ 17 Dec 25 21:37 UTC │
	│ start   │ -p pause-918446 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                              │ pause-918446              │ jenkins │ v1.37.0 │ 17 Dec 25 21:37 UTC │ 17 Dec 25 21:37 UTC │
	│ pause   │ -p pause-918446 --alsologtostderr -v=5                                                                                                        │ pause-918446              │ jenkins │ v1.37.0 │ 17 Dec 25 21:37 UTC │                     │
	│ delete  │ -p pause-918446                                                                                                                               │ pause-918446              │ jenkins │ v1.37.0 │ 17 Dec 25 21:37 UTC │ 17 Dec 25 21:38 UTC │
	│ start   │ -p force-systemd-flag-529066 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                   │ force-systemd-flag-529066 │ jenkins │ v1.37.0 │ 17 Dec 25 21:38 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
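	# [editor's note] Reproduction sequence read off the audit table above; the
	# third command is the one that never records an END TIME:
	out/minikube-linux-arm64 start -p kubernetes-upgrade-342357 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker --container-runtime=crio
	out/minikube-linux-arm64 stop -p kubernetes-upgrade-342357
	out/minikube-linux-arm64 start -p kubernetes-upgrade-342357 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=docker --container-runtime=crio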
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 21:38:01
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 21:38:01.665202  708397 out.go:360] Setting OutFile to fd 1 ...
	I1217 21:38:01.665439  708397 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 21:38:01.665474  708397 out.go:374] Setting ErrFile to fd 2...
	I1217 21:38:01.665495  708397 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 21:38:01.665881  708397 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-485134/.minikube/bin
	I1217 21:38:01.666386  708397 out.go:368] Setting JSON to false
	I1217 21:38:01.667338  708397 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":15631,"bootTime":1765991851,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1217 21:38:01.667450  708397 start.go:143] virtualization:  
	I1217 21:38:01.671107  708397 out.go:179] * [force-systemd-flag-529066] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 21:38:01.675550  708397 out.go:179]   - MINIKUBE_LOCATION=21808
	I1217 21:38:01.675626  708397 notify.go:221] Checking for updates...
	I1217 21:38:01.679492  708397 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 21:38:01.683038  708397 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-485134/kubeconfig
	I1217 21:38:01.686339  708397 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-485134/.minikube
	I1217 21:38:01.689503  708397 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 21:38:01.692724  708397 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 21:38:01.696388  708397 config.go:182] Loaded profile config "kubernetes-upgrade-342357": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 21:38:01.696552  708397 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 21:38:01.733087  708397 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 21:38:01.733203  708397 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 21:38:01.800665  708397 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 21:38:01.790535355 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 21:38:01.800778  708397 docker.go:319] overlay module found
	I1217 21:38:01.806135  708397 out.go:179] * Using the docker driver based on user configuration
	I1217 21:38:01.809046  708397 start.go:309] selected driver: docker
	I1217 21:38:01.809070  708397 start.go:927] validating driver "docker" against <nil>
	I1217 21:38:01.809085  708397 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 21:38:01.809851  708397 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 21:38:01.870188  708397 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 21:38:01.860162188 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 21:38:01.870372  708397 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 21:38:01.870602  708397 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1217 21:38:01.873714  708397 out.go:179] * Using Docker driver with root privileges
	I1217 21:38:01.876756  708397 cni.go:84] Creating CNI manager for ""
	I1217 21:38:01.876845  708397 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 21:38:01.876859  708397 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1217 21:38:01.876948  708397 start.go:353] cluster config:
	{Name:force-systemd-flag-529066 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:force-systemd-flag-529066 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 21:38:01.880313  708397 out.go:179] * Starting "force-systemd-flag-529066" primary control-plane node in "force-systemd-flag-529066" cluster
	I1217 21:38:01.883251  708397 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 21:38:01.886417  708397 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 21:38:01.889327  708397 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 21:38:01.889381  708397 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21808-485134/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4
	I1217 21:38:01.889384  708397 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 21:38:01.889391  708397 cache.go:65] Caching tarball of preloaded images
	I1217 21:38:01.889550  708397 preload.go:238] Found /home/jenkins/minikube-integration/21808-485134/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1217 21:38:01.889561  708397 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1217 21:38:01.889712  708397 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/force-systemd-flag-529066/config.json ...
	I1217 21:38:01.889753  708397 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/force-systemd-flag-529066/config.json: {Name:mkb646a4b6f82a7a92fd7918af2e4fc4e27ff842 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 21:38:01.909709  708397 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 21:38:01.909732  708397 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 21:38:01.909751  708397 cache.go:243] Successfully downloaded all kic artifacts
	I1217 21:38:01.909779  708397 start.go:360] acquireMachinesLock for force-systemd-flag-529066: {Name:mk274256da0ae1c9919a949c2bf48b0a5f81b102 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 21:38:01.909910  708397 start.go:364] duration metric: took 98.323µs to acquireMachinesLock for "force-systemd-flag-529066"
	I1217 21:38:01.909942  708397 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-529066 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:force-systemd-flag-529066 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 21:38:01.910051  708397 start.go:125] createHost starting for "" (driver="docker")
	I1217 21:38:01.913516  708397 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1217 21:38:01.913767  708397 start.go:159] libmachine.API.Create for "force-systemd-flag-529066" (driver="docker")
	I1217 21:38:01.913806  708397 client.go:173] LocalClient.Create starting
	I1217 21:38:01.913892  708397 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem
	I1217 21:38:01.913932  708397 main.go:143] libmachine: Decoding PEM data...
	I1217 21:38:01.913957  708397 main.go:143] libmachine: Parsing certificate...
	I1217 21:38:01.914016  708397 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem
	I1217 21:38:01.914039  708397 main.go:143] libmachine: Decoding PEM data...
	I1217 21:38:01.914055  708397 main.go:143] libmachine: Parsing certificate...
	I1217 21:38:01.914442  708397 cli_runner.go:164] Run: docker network inspect force-systemd-flag-529066 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1217 21:38:01.931853  708397 cli_runner.go:211] docker network inspect force-systemd-flag-529066 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1217 21:38:01.931957  708397 network_create.go:284] running [docker network inspect force-systemd-flag-529066] to gather additional debugging logs...
	I1217 21:38:01.931980  708397 cli_runner.go:164] Run: docker network inspect force-systemd-flag-529066
	W1217 21:38:01.948347  708397 cli_runner.go:211] docker network inspect force-systemd-flag-529066 returned with exit code 1
	I1217 21:38:01.948381  708397 network_create.go:287] error running [docker network inspect force-systemd-flag-529066]: docker network inspect force-systemd-flag-529066: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-529066 not found
	I1217 21:38:01.948399  708397 network_create.go:289] output of [docker network inspect force-systemd-flag-529066]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-529066 not found
	
	** /stderr **
	I1217 21:38:01.948500  708397 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 21:38:01.965607  708397 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-254979ff9069 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ae:ac:44:40:5e:f0} reservation:<nil>}
	I1217 21:38:01.965897  708397 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-f50c6765c39b IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:6a:8e:a0:e4:11:17} reservation:<nil>}
	I1217 21:38:01.966144  708397 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-5c5b31cd5961 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f2:2f:3e:ac:ea:b3} reservation:<nil>}
	I1217 21:38:01.966593  708397 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019fec70}
	I1217 21:38:01.966629  708397 network_create.go:124] attempt to create docker network force-systemd-flag-529066 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1217 21:38:01.966692  708397 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-529066 force-systemd-flag-529066
	I1217 21:38:02.029598  708397 network_create.go:108] docker network force-systemd-flag-529066 192.168.76.0/24 created
	I1217 21:38:02.029631  708397 kic.go:121] calculated static IP "192.168.76.2" for the "force-systemd-flag-529066" container
	I1217 21:38:02.029718  708397 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1217 21:38:02.046947  708397 cli_runner.go:164] Run: docker volume create force-systemd-flag-529066 --label name.minikube.sigs.k8s.io=force-systemd-flag-529066 --label created_by.minikube.sigs.k8s.io=true
	I1217 21:38:02.067652  708397 oci.go:103] Successfully created a docker volume force-systemd-flag-529066
	I1217 21:38:02.067736  708397 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-529066-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-529066 --entrypoint /usr/bin/test -v force-systemd-flag-529066:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1217 21:38:02.618463  708397 oci.go:107] Successfully prepared a docker volume force-systemd-flag-529066
	I1217 21:38:02.618529  708397 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 21:38:02.618542  708397 kic.go:194] Starting extracting preloaded images to volume ...
	I1217 21:38:02.618615  708397 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21808-485134/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-529066:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
	I1217 21:38:06.710614  708397 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21808-485134/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-529066:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (4.091956318s)
	I1217 21:38:06.710658  708397 kic.go:203] duration metric: took 4.092112143s to extract preloaded images to volume ...
	W1217 21:38:06.710817  708397 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1217 21:38:06.710936  708397 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1217 21:38:06.763976  708397 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-529066 --name force-systemd-flag-529066 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-529066 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-529066 --network force-systemd-flag-529066 --ip 192.168.76.2 --volume force-systemd-flag-529066:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
	I1217 21:38:07.063564  708397 cli_runner.go:164] Run: docker container inspect force-systemd-flag-529066 --format={{.State.Running}}
	I1217 21:38:07.082077  708397 cli_runner.go:164] Run: docker container inspect force-systemd-flag-529066 --format={{.State.Status}}
	I1217 21:38:07.104722  708397 cli_runner.go:164] Run: docker exec force-systemd-flag-529066 stat /var/lib/dpkg/alternatives/iptables
	I1217 21:38:07.157678  708397 oci.go:144] the created container "force-systemd-flag-529066" has a running status.
	I1217 21:38:07.157705  708397 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21808-485134/.minikube/machines/force-systemd-flag-529066/id_rsa...
	I1217 21:38:08.420633  708397 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/machines/force-systemd-flag-529066/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1217 21:38:08.420685  708397 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21808-485134/.minikube/machines/force-systemd-flag-529066/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1217 21:38:08.442775  708397 cli_runner.go:164] Run: docker container inspect force-systemd-flag-529066 --format={{.State.Status}}
	I1217 21:38:08.460072  708397 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1217 21:38:08.460105  708397 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-529066 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1217 21:38:08.505166  708397 cli_runner.go:164] Run: docker container inspect force-systemd-flag-529066 --format={{.State.Status}}
	I1217 21:38:08.524748  708397 machine.go:94] provisionDockerMachine start ...
	I1217 21:38:08.524857  708397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-529066
	I1217 21:38:08.543805  708397 main.go:143] libmachine: Using SSH client type: native
	I1217 21:38:08.544171  708397 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33428 <nil> <nil>}
	I1217 21:38:08.544188  708397 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 21:38:08.679380  708397 main.go:143] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-529066
	
	I1217 21:38:08.679455  708397 ubuntu.go:182] provisioning hostname "force-systemd-flag-529066"
	I1217 21:38:08.679547  708397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-529066
	I1217 21:38:08.697922  708397 main.go:143] libmachine: Using SSH client type: native
	I1217 21:38:08.698244  708397 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33428 <nil> <nil>}
	I1217 21:38:08.698262  708397 main.go:143] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-529066 && echo "force-systemd-flag-529066" | sudo tee /etc/hostname
	I1217 21:38:08.842298  708397 main.go:143] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-529066
	
	I1217 21:38:08.842378  708397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-529066
	I1217 21:38:08.863274  708397 main.go:143] libmachine: Using SSH client type: native
	I1217 21:38:08.863633  708397 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33428 <nil> <nil>}
	I1217 21:38:08.863659  708397 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-529066' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-529066/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-529066' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 21:38:08.996204  708397 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 21:38:08.996273  708397 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21808-485134/.minikube CaCertPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21808-485134/.minikube}
	I1217 21:38:08.996304  708397 ubuntu.go:190] setting up certificates
	I1217 21:38:08.996313  708397 provision.go:84] configureAuth start
	I1217 21:38:08.996374  708397 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-529066
	I1217 21:38:09.016693  708397 provision.go:143] copyHostCerts
	I1217 21:38:09.016744  708397 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem
	I1217 21:38:09.016777  708397 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem, removing ...
	I1217 21:38:09.016790  708397 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem
	I1217 21:38:09.016869  708397 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem (1082 bytes)
	I1217 21:38:09.017140  708397 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem
	I1217 21:38:09.017172  708397 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem, removing ...
	I1217 21:38:09.017178  708397 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem
	I1217 21:38:09.017216  708397 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem (1123 bytes)
	I1217 21:38:09.017290  708397 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem
	I1217 21:38:09.017307  708397 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem, removing ...
	I1217 21:38:09.017312  708397 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem
	I1217 21:38:09.017337  708397 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem (1675 bytes)
	I1217 21:38:09.017389  708397 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-529066 san=[127.0.0.1 192.168.76.2 force-systemd-flag-529066 localhost minikube]
	I1217 21:38:09.354144  708397 provision.go:177] copyRemoteCerts
	I1217 21:38:09.354219  708397 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 21:38:09.354267  708397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-529066
	I1217 21:38:09.371232  708397 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/force-systemd-flag-529066/id_rsa Username:docker}
	I1217 21:38:09.467383  708397 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1217 21:38:09.467442  708397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 21:38:09.483957  708397 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1217 21:38:09.484016  708397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1217 21:38:09.500653  708397 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1217 21:38:09.500721  708397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 21:38:09.518482  708397 provision.go:87] duration metric: took 522.153378ms to configureAuth
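
The configureAuth step above generates a server certificate signed by the local minikube CA, with SANs covering 127.0.0.1, the container IP, the machine name, localhost and minikube (see the san=[...] list a few lines up). Minikube does this in Go via crypto/x509 rather than by shelling out, but a rough openssl equivalent (hypothetical file names) looks like:

	# minimal sketch of the same CA-signed server cert, assuming ca.pem/ca-key.pem already exist
	printf 'subjectAltName=DNS:localhost,DNS:minikube,DNS:force-systemd-flag-529066,IP:127.0.0.1,IP:192.168.76.2\n' > san.ext
	openssl req -new -newkey rsa:2048 -nodes -subj '/O=jenkins.force-systemd-flag-529066' \
	  -keyout server-key.pem -out server.csr
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	  -days 365 -extfile san.ext -out server.pem
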
	I1217 21:38:09.518510  708397 ubuntu.go:206] setting minikube options for container-runtime
	I1217 21:38:09.518726  708397 config.go:182] Loaded profile config "force-systemd-flag-529066": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 21:38:09.518839  708397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-529066
	I1217 21:38:09.535452  708397 main.go:143] libmachine: Using SSH client type: native
	I1217 21:38:09.535816  708397 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33428 <nil> <nil>}
	I1217 21:38:09.535836  708397 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 21:38:09.860097  708397 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 21:38:09.860121  708397 machine.go:97] duration metric: took 1.33535012s to provisionDockerMachine
	I1217 21:38:09.860133  708397 client.go:176] duration metric: took 7.946300404s to LocalClient.Create
	I1217 21:38:09.860155  708397 start.go:167] duration metric: took 7.946389455s to libmachine.API.Create "force-systemd-flag-529066"
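
The /etc/sysconfig/crio.minikube file written over SSH above only has an effect because the crio unit in the kic base image references that environment file; conceptually the unit looks like the following (paraphrased sketch, not the exact unit shipped in the image):

	# sketch of how crio.service consumes the env file written above
	[Service]
	EnvironmentFile=-/etc/sysconfig/crio.minikube
	ExecStart=
	ExecStart=/usr/bin/crio $CRIO_MINIKUBE_OPTIONS

The systemctl restart crio at the end of the SSH command is what makes the --insecure-registry flag for the 10.96.0.0/12 service CIDR take effect.
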
	I1217 21:38:09.860163  708397 start.go:293] postStartSetup for "force-systemd-flag-529066" (driver="docker")
	I1217 21:38:09.860173  708397 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 21:38:09.860247  708397 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 21:38:09.860287  708397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-529066
	I1217 21:38:09.877563  708397 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/force-systemd-flag-529066/id_rsa Username:docker}
	I1217 21:38:09.971834  708397 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 21:38:09.975108  708397 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 21:38:09.975138  708397 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 21:38:09.975150  708397 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-485134/.minikube/addons for local assets ...
	I1217 21:38:09.975211  708397 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-485134/.minikube/files for local assets ...
	I1217 21:38:09.975299  708397 filesync.go:149] local asset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem -> 4884122.pem in /etc/ssl/certs
	I1217 21:38:09.975310  708397 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem -> /etc/ssl/certs/4884122.pem
	I1217 21:38:09.975411  708397 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 21:38:09.982931  708397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem --> /etc/ssl/certs/4884122.pem (1708 bytes)
	I1217 21:38:10.006608  708397 start.go:296] duration metric: took 146.403832ms for postStartSetup
	I1217 21:38:10.007092  708397 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-529066
	I1217 21:38:10.025873  708397 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/force-systemd-flag-529066/config.json ...
	I1217 21:38:10.026186  708397 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 21:38:10.026251  708397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-529066
	I1217 21:38:10.044076  708397 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/force-systemd-flag-529066/id_rsa Username:docker}
	I1217 21:38:10.136768  708397 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 21:38:10.141372  708397 start.go:128] duration metric: took 8.231304935s to createHost
	I1217 21:38:10.141401  708397 start.go:83] releasing machines lock for "force-systemd-flag-529066", held for 8.231470393s
	I1217 21:38:10.141474  708397 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-529066
	I1217 21:38:10.158354  708397 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem (1338 bytes)
	W1217 21:38:10.158412  708397 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412_empty.pem, impossibly tiny 0 bytes
	I1217 21:38:10.158427  708397 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 21:38:10.158454  708397 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem (1082 bytes)
	I1217 21:38:10.158479  708397 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem (1123 bytes)
	I1217 21:38:10.158504  708397 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem (1675 bytes)
	I1217 21:38:10.158554  708397 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem (1708 bytes)
	I1217 21:38:10.158587  708397 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem -> /usr/share/ca-certificates/488412.pem
	I1217 21:38:10.158599  708397 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem -> /usr/share/ca-certificates/4884122.pem
	I1217 21:38:10.158612  708397 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1217 21:38:10.158629  708397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem --> /usr/share/ca-certificates/488412.pem (1338 bytes)
	I1217 21:38:10.158680  708397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-529066
	I1217 21:38:10.175822  708397 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/force-systemd-flag-529066/id_rsa Username:docker}
	I1217 21:38:10.281576  708397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem --> /usr/share/ca-certificates/4884122.pem (1708 bytes)
	I1217 21:38:10.299844  708397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 21:38:10.318333  708397 ssh_runner.go:195] Run: openssl version
	I1217 21:38:10.325197  708397 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/488412.pem
	I1217 21:38:10.333812  708397 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/488412.pem /etc/ssl/certs/488412.pem
	I1217 21:38:10.341899  708397 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/488412.pem
	I1217 21:38:10.346182  708397 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 20:21 /usr/share/ca-certificates/488412.pem
	I1217 21:38:10.346248  708397 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/488412.pem
	I1217 21:38:10.388129  708397 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 21:38:10.395620  708397 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/488412.pem /etc/ssl/certs/51391683.0
	I1217 21:38:10.402822  708397 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4884122.pem
	I1217 21:38:10.410541  708397 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4884122.pem /etc/ssl/certs/4884122.pem
	I1217 21:38:10.418138  708397 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4884122.pem
	I1217 21:38:10.422123  708397 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 20:21 /usr/share/ca-certificates/4884122.pem
	I1217 21:38:10.422222  708397 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4884122.pem
	I1217 21:38:10.463841  708397 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 21:38:10.471310  708397 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4884122.pem /etc/ssl/certs/3ec20f2e.0
	I1217 21:38:10.478662  708397 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 21:38:10.486093  708397 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 21:38:10.493379  708397 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 21:38:10.497008  708397 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I1217 21:38:10.497131  708397 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 21:38:10.537955  708397 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 21:38:10.545363  708397 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1217 21:38:10.552565  708397 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-certificates >/dev/null 2>&1 && sudo update-ca-certificates || true"
	I1217 21:38:10.555905  708397 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-trust >/dev/null 2>&1 && sudo update-ca-trust extract || true"
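
The hash-then-symlink sequence above (repeated once per CA file) implements OpenSSL's CApath lookup convention: verifiers resolve a CA in a certificate directory by the file name <subject-hash>.0, and the hash is exactly what `openssl x509 -hash -noout` prints (b5213941 for minikubeCA.pem here). Done by hand, with some-cert.pem standing in for any cert signed by that CA:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
	openssl verify -CApath /etc/ssl/certs some-cert.pem   # lookup now succeeds via the symlink

The trailing update-ca-certificates / update-ca-trust calls then refresh the distro bundle on Debian- and RHEL-style images respectively; the `command -v ... || true` guards make each a no-op where the tool is absent.
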
	I1217 21:38:10.559260  708397 ssh_runner.go:195] Run: cat /version.json
	I1217 21:38:10.559387  708397 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 21:38:10.563624  708397 ssh_runner.go:195] Run: systemctl --version
	I1217 21:38:10.664593  708397 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 21:38:10.700420  708397 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 21:38:10.704571  708397 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 21:38:10.704691  708397 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 21:38:10.732892  708397 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1217 21:38:10.732917  708397 start.go:496] detecting cgroup driver to use...
	I1217 21:38:10.732930  708397 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1217 21:38:10.733005  708397 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 21:38:10.751096  708397 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
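
`systemctl is-active --quiet <unit>` exits 0 only when the unit is active, so minikube uses it as a cheap boolean probe after each stop/disable (here for containerd, later for docker). The same pattern as a shell guard:

	if sudo systemctl is-active --quiet containerd; then
	  echo "containerd is still running" >&2
	  sudo systemctl stop -f containerd
	fi
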
	I1217 21:38:10.764146  708397 docker.go:218] disabling cri-docker service (if available) ...
	I1217 21:38:10.764261  708397 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 21:38:10.782260  708397 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 21:38:10.800947  708397 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 21:38:10.916398  708397 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 21:38:11.034338  708397 docker.go:234] disabling docker service ...
	I1217 21:38:11.034440  708397 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 21:38:11.055242  708397 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 21:38:11.070903  708397 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 21:38:11.214302  708397 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 21:38:11.340423  708397 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 21:38:11.353562  708397 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 21:38:11.367566  708397 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 21:38:11.367666  708397 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 21:38:11.376741  708397 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1217 21:38:11.376867  708397 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 21:38:11.385867  708397 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 21:38:11.394697  708397 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 21:38:11.404522  708397 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 21:38:11.412769  708397 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 21:38:11.421279  708397 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 21:38:11.435216  708397 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 21:38:11.444712  708397 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 21:38:11.452060  708397 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 21:38:11.459408  708397 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 21:38:11.586250  708397 ssh_runner.go:195] Run: sudo systemctl restart crio
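
The sed pipeline above rewrites /etc/crio/crio.conf.d/02-crio.conf in place; a trimmed sketch of what the drop-in ends up containing (section placement per CRI-O's config schema):

	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

Setting net.ipv4.ip_unprivileged_port_start=0 lets pod processes bind ports below 1024 without NET_BIND_SERVICE, which addons like the ingress controller rely on to listen on :80/:443; the systemctl daemon-reload plus restart crio then applies the new config.
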
	I1217 21:38:11.758696  708397 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 21:38:11.758791  708397 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 21:38:11.762463  708397 start.go:564] Will wait 60s for crictl version
	I1217 21:38:11.762575  708397 ssh_runner.go:195] Run: which crictl
	I1217 21:38:11.765945  708397 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 21:38:11.789983  708397 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 21:38:11.790107  708397 ssh_runner.go:195] Run: crio --version
	I1217 21:38:11.818835  708397 ssh_runner.go:195] Run: crio --version
	I1217 21:38:11.861069  708397 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	I1217 21:38:11.864063  708397 cli_runner.go:164] Run: docker network inspect force-systemd-flag-529066 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 21:38:11.887417  708397 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1217 21:38:11.891199  708397 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
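
The awkward-looking hosts update above is deliberate: a shell redirection runs with the caller's privileges (so plain `sudo cmd > /etc/hosts` would fail), and inside a container /etc/hosts is a bind mount that cannot be replaced by rename, only overwritten in place with cp. The generic shape:

	# replace-or-append an entry without renaming the bind-mounted file
	{ grep -v $'\thost.minikube.internal$' /etc/hosts; echo $'192.168.76.1\thost.minikube.internal'; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts && rm /tmp/h.$$
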
	I1217 21:38:11.900945  708397 kubeadm.go:884] updating cluster {Name:force-systemd-flag-529066 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:force-systemd-flag-529066 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 21:38:11.901074  708397 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 21:38:11.901135  708397 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 21:38:11.937625  708397 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 21:38:11.937651  708397 crio.go:433] Images already preloaded, skipping extraction
	I1217 21:38:11.937706  708397 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 21:38:11.962880  708397 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 21:38:11.962902  708397 cache_images.go:86] Images are preloaded, skipping loading
	I1217 21:38:11.962909  708397 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.3 crio true true} ...
	I1217 21:38:11.962994  708397 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=force-systemd-flag-529066 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:force-systemd-flag-529066 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
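
The empty `ExecStart=` line in the rendered kubelet unit above is systemd's list-reset idiom: a non-oneshot service may only have one ExecStart, so a drop-in must first clear the value inherited from the base unit before supplying its replacement. In miniature (command path hypothetical):

	[Service]
	ExecStart=
	ExecStart=/new/command --with-flags

Minikube's 10-kubeadm.conf drop-in, written a few lines below, has exactly this shape.
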
	I1217 21:38:11.963071  708397 ssh_runner.go:195] Run: crio config
	I1217 21:38:12.018278  708397 cni.go:84] Creating CNI manager for ""
	I1217 21:38:12.018355  708397 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 21:38:12.018379  708397 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 21:38:12.018431  708397 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-529066 NodeName:force-systemd-flag-529066 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 21:38:12.018675  708397 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-flag-529066"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 21:38:12.018818  708397 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1217 21:38:12.026638  708397 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 21:38:12.026735  708397 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 21:38:12.034494  708397 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1217 21:38:12.047463  708397 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 21:38:12.061419  708397 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
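
Once the rendered multi-document YAML lands in /var/tmp/minikube/kubeadm.yaml.new it can be sanity-checked by hand on the node; recent kubeadm releases (v1.26+) ship a validator for exactly this format:

	# validate the generated config without mutating anything
	sudo /var/lib/minikube/binaries/v1.34.3/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
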
	I1217 21:38:12.076240  708397 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1217 21:38:12.080092  708397 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 21:38:12.089685  708397 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 21:38:12.213211  708397 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 21:38:12.230422  708397 certs.go:69] Setting up /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/force-systemd-flag-529066 for IP: 192.168.76.2
	I1217 21:38:12.230440  708397 certs.go:195] generating shared ca certs ...
	I1217 21:38:12.230457  708397 certs.go:227] acquiring lock for ca certs: {Name:mk2b1bd9fa0b029b02dd38a7b1b08717d4426eca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 21:38:12.230609  708397 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.key
	I1217 21:38:12.230661  708397 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.key
	I1217 21:38:12.230673  708397 certs.go:257] generating profile certs ...
	I1217 21:38:12.230731  708397 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/force-systemd-flag-529066/client.key
	I1217 21:38:12.230747  708397 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/force-systemd-flag-529066/client.crt with IP's: []
	I1217 21:38:12.435903  708397 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/force-systemd-flag-529066/client.crt ...
	I1217 21:38:12.435937  708397 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/force-systemd-flag-529066/client.crt: {Name:mk08cf7a5ca48a71899fe6fc76af885df8a5231f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 21:38:12.436145  708397 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/force-systemd-flag-529066/client.key ...
	I1217 21:38:12.436163  708397 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/force-systemd-flag-529066/client.key: {Name:mk3607f4680bd7b8a7f9c633c83859c8ad330988 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 21:38:12.436265  708397 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/force-systemd-flag-529066/apiserver.key.a2b1e6d7
	I1217 21:38:12.436286  708397 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/force-systemd-flag-529066/apiserver.crt.a2b1e6d7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1217 21:38:12.990261  708397 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/force-systemd-flag-529066/apiserver.crt.a2b1e6d7 ...
	I1217 21:38:12.990294  708397 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/force-systemd-flag-529066/apiserver.crt.a2b1e6d7: {Name:mk768e705cd8898fb4c87a72579097f020ba8b10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 21:38:12.990478  708397 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/force-systemd-flag-529066/apiserver.key.a2b1e6d7 ...
	I1217 21:38:12.990493  708397 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/force-systemd-flag-529066/apiserver.key.a2b1e6d7: {Name:mk8eb24dd0b8350bae029ced282e5b0ef95bd4c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 21:38:12.990581  708397 certs.go:382] copying /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/force-systemd-flag-529066/apiserver.crt.a2b1e6d7 -> /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/force-systemd-flag-529066/apiserver.crt
	I1217 21:38:12.990674  708397 certs.go:386] copying /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/force-systemd-flag-529066/apiserver.key.a2b1e6d7 -> /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/force-systemd-flag-529066/apiserver.key
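
The IP SAN list for the apiserver cert above is not arbitrary: 10.96.0.1 is the first address of the 10.96.0.0/12 service CIDR (the ClusterIP of the kubernetes.default service, through which in-cluster clients reach the apiserver), 192.168.76.2 is the node IP, 127.0.0.1 covers loopback, and 10.0.0.1 presumably covers clusters using an older default service CIDR. To inspect what actually got baked in:

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/force-systemd-flag-529066/apiserver.crt \
	  | grep -A1 'Subject Alternative Name'
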
	I1217 21:38:12.990735  708397 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/force-systemd-flag-529066/proxy-client.key
	I1217 21:38:12.990754  708397 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/force-systemd-flag-529066/proxy-client.crt with IP's: []
	I1217 21:38:13.143450  708397 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/force-systemd-flag-529066/proxy-client.crt ...
	I1217 21:38:13.143482  708397 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/force-systemd-flag-529066/proxy-client.crt: {Name:mkf2c155cdf7034a5516d3483f68748e75907e88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 21:38:13.143690  708397 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/force-systemd-flag-529066/proxy-client.key ...
	I1217 21:38:13.143706  708397 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/force-systemd-flag-529066/proxy-client.key: {Name:mk0d26453bf405124164b23846eee75eb655895c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 21:38:13.143805  708397 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1217 21:38:13.143832  708397 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1217 21:38:13.143848  708397 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1217 21:38:13.143868  708397 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1217 21:38:13.143883  708397 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/force-systemd-flag-529066/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1217 21:38:13.143901  708397 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/force-systemd-flag-529066/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1217 21:38:13.143913  708397 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/force-systemd-flag-529066/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1217 21:38:13.143927  708397 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/force-systemd-flag-529066/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1217 21:38:13.143975  708397 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem (1338 bytes)
	W1217 21:38:13.144015  708397 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412_empty.pem, impossibly tiny 0 bytes
	I1217 21:38:13.144028  708397 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 21:38:13.144053  708397 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem (1082 bytes)
	I1217 21:38:13.144083  708397 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem (1123 bytes)
	I1217 21:38:13.144116  708397 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem (1675 bytes)
	I1217 21:38:13.144165  708397 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem (1708 bytes)
	I1217 21:38:13.144199  708397 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem -> /usr/share/ca-certificates/488412.pem
	I1217 21:38:13.144216  708397 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem -> /usr/share/ca-certificates/4884122.pem
	I1217 21:38:13.144227  708397 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1217 21:38:13.144812  708397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 21:38:13.163355  708397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 21:38:13.181538  708397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 21:38:13.200935  708397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1217 21:38:13.218520  708397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/force-systemd-flag-529066/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1217 21:38:13.235834  708397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/force-systemd-flag-529066/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 21:38:13.253853  708397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/force-systemd-flag-529066/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 21:38:13.274165  708397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/force-systemd-flag-529066/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 21:38:13.294402  708397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem --> /usr/share/ca-certificates/488412.pem (1338 bytes)
	I1217 21:38:13.312647  708397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem --> /usr/share/ca-certificates/4884122.pem (1708 bytes)
	I1217 21:38:13.339178  708397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 21:38:13.364294  708397 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 21:38:13.380151  708397 ssh_runner.go:195] Run: openssl version
	I1217 21:38:13.389012  708397 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 21:38:13.397007  708397 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 21:38:13.404454  708397 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 21:38:13.408473  708397 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I1217 21:38:13.408540  708397 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 21:38:13.449819  708397 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 21:38:13.457870  708397 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/488412.pem
	I1217 21:38:13.465376  708397 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/488412.pem /etc/ssl/certs/488412.pem
	I1217 21:38:13.473275  708397 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/488412.pem
	I1217 21:38:13.477361  708397 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 20:21 /usr/share/ca-certificates/488412.pem
	I1217 21:38:13.477429  708397 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/488412.pem
	I1217 21:38:13.520490  708397 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 21:38:13.528181  708397 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4884122.pem
	I1217 21:38:13.535743  708397 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4884122.pem /etc/ssl/certs/4884122.pem
	I1217 21:38:13.543686  708397 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4884122.pem
	I1217 21:38:13.547631  708397 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 20:21 /usr/share/ca-certificates/4884122.pem
	I1217 21:38:13.547700  708397 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4884122.pem
	I1217 21:38:13.589082  708397 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 21:38:13.596695  708397 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 21:38:13.600366  708397 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 21:38:13.600422  708397 kubeadm.go:401] StartCluster: {Name:force-systemd-flag-529066 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:force-systemd-flag-529066 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 21:38:13.600490  708397 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 21:38:13.600547  708397 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 21:38:13.627329  708397 cri.go:89] found id: ""
	I1217 21:38:13.627401  708397 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 21:38:13.635289  708397 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 21:38:13.643023  708397 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 21:38:13.643121  708397 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 21:38:13.650741  708397 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 21:38:13.650760  708397 kubeadm.go:158] found existing configuration files:
	
	I1217 21:38:13.650814  708397 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 21:38:13.658498  708397 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 21:38:13.658575  708397 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 21:38:13.665971  708397 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 21:38:13.673653  708397 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 21:38:13.673749  708397 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 21:38:13.681604  708397 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 21:38:13.689385  708397 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 21:38:13.689459  708397 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 21:38:13.696647  708397 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 21:38:13.704672  708397 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 21:38:13.704766  708397 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 21:38:13.712575  708397 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 21:38:13.752361  708397 kubeadm.go:319] [init] Using Kubernetes version: v1.34.3
	I1217 21:38:13.752424  708397 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 21:38:13.784507  708397 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 21:38:13.784585  708397 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1217 21:38:13.784625  708397 kubeadm.go:319] OS: Linux
	I1217 21:38:13.784681  708397 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 21:38:13.784735  708397 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 21:38:13.784785  708397 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 21:38:13.784837  708397 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 21:38:13.784888  708397 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 21:38:13.784940  708397 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 21:38:13.784989  708397 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 21:38:13.785041  708397 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 21:38:13.785090  708397 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 21:38:13.862467  708397 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 21:38:13.862587  708397 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 21:38:13.862682  708397 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 21:38:13.873566  708397 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 21:38:13.879972  708397 out.go:252]   - Generating certificates and keys ...
	I1217 21:38:13.880097  708397 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 21:38:13.880185  708397 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 21:38:14.210466  708397 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 21:38:14.703353  708397 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 21:38:15.192159  708397 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 21:38:16.250147  708397 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 21:38:17.252262  708397 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 21:38:17.252639  708397 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-529066 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1217 21:38:17.604291  708397 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 21:38:17.604583  708397 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-529066 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1217 21:38:19.139712  708397 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 21:38:19.548801  708397 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 21:38:20.292742  708397 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 21:38:20.293032  708397 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 21:38:20.673819  708397 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 21:38:21.192781  708397 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 21:38:21.739697  708397 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 21:38:22.742241  708397 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 21:38:23.827593  708397 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 21:38:23.827701  708397 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 21:38:23.827778  708397 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 21:38:24.443465  666795 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001439744s
	I1217 21:38:24.449936  666795 kubeadm.go:319] 
	I1217 21:38:24.450031  666795 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 21:38:24.450070  666795 kubeadm.go:319] 	- The kubelet is not running
	I1217 21:38:24.450202  666795 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 21:38:24.450208  666795 kubeadm.go:319] 
	I1217 21:38:24.450357  666795 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 21:38:24.450409  666795 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 21:38:24.450443  666795 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 21:38:24.450447  666795 kubeadm.go:319] 
	I1217 21:38:24.459286  666795 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1217 21:38:24.460014  666795 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 21:38:24.460341  666795 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 21:38:24.460648  666795 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1217 21:38:24.460655  666795 kubeadm.go:319] 
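
When kubeadm gives up on the port-10248 health check as above, the probes it suggests can be run directly on the node; the kubelet healthz endpoint is plain HTTP on loopback:

	# on the failing node
	systemctl status kubelet --no-pager
	journalctl -xeu kubelet --no-pager | tail -n 50
	curl -sS http://127.0.0.1:10248/healthz; echo
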
	I1217 21:38:24.460818  666795 kubeadm.go:403] duration metric: took 12m10.301025728s to StartCluster
	I1217 21:38:24.460989  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 21:38:24.461070  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 21:38:24.461154  666795 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1217 21:38:24.493756  666795 cri.go:89] found id: ""
	I1217 21:38:24.493798  666795 logs.go:282] 0 containers: []
	W1217 21:38:24.493808  666795 logs.go:284] No container was found matching "kube-apiserver"
	I1217 21:38:24.493814  666795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 21:38:24.493886  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 21:38:24.521493  666795 cri.go:89] found id: ""
	I1217 21:38:24.521524  666795 logs.go:282] 0 containers: []
	W1217 21:38:24.521543  666795 logs.go:284] No container was found matching "etcd"
	I1217 21:38:24.521549  666795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 21:38:24.521635  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 21:38:24.554181  666795 cri.go:89] found id: ""
	I1217 21:38:24.554208  666795 logs.go:282] 0 containers: []
	W1217 21:38:24.554217  666795 logs.go:284] No container was found matching "coredns"
	I1217 21:38:24.554228  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 21:38:24.554286  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 21:38:24.605864  666795 cri.go:89] found id: ""
	I1217 21:38:24.605890  666795 logs.go:282] 0 containers: []
	W1217 21:38:24.605899  666795 logs.go:284] No container was found matching "kube-scheduler"
	I1217 21:38:24.605905  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 21:38:24.605966  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 21:38:24.684707  666795 cri.go:89] found id: ""
	I1217 21:38:24.684734  666795 logs.go:282] 0 containers: []
	W1217 21:38:24.684743  666795 logs.go:284] No container was found matching "kube-proxy"
	I1217 21:38:24.684749  666795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 21:38:24.684810  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 21:38:24.736800  666795 cri.go:89] found id: ""
	I1217 21:38:24.736880  666795 logs.go:282] 0 containers: []
	W1217 21:38:24.736903  666795 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 21:38:24.736922  666795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 21:38:24.737008  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 21:38:24.768479  666795 cri.go:89] found id: ""
	I1217 21:38:24.768582  666795 logs.go:282] 0 containers: []
	W1217 21:38:24.768611  666795 logs.go:284] No container was found matching "kindnet"
	I1217 21:38:24.768656  666795 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 21:38:24.768790  666795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 21:38:24.805973  666795 cri.go:89] found id: ""
	I1217 21:38:24.806054  666795 logs.go:282] 0 containers: []
	W1217 21:38:24.806077  666795 logs.go:284] No container was found matching "storage-provisioner"
	I1217 21:38:24.806106  666795 logs.go:123] Gathering logs for dmesg ...
	I1217 21:38:24.806161  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 21:38:24.824207  666795 logs.go:123] Gathering logs for describe nodes ...
	I1217 21:38:24.824293  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 21:38:24.911297  666795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 21:38:24.911374  666795 logs.go:123] Gathering logs for CRI-O ...
	I1217 21:38:24.911412  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 21:38:24.949613  666795 logs.go:123] Gathering logs for container status ...
	I1217 21:38:24.949710  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 21:38:24.990285  666795 logs.go:123] Gathering logs for kubelet ...
	I1217 21:38:24.990366  666795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1217 21:38:25.077352  666795 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001439744s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1217 21:38:25.077470  666795 out.go:285] * 
	W1217 21:38:25.077694  666795 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001439744s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1217 21:38:25.077757  666795 out.go:285] * 
	W1217 21:38:25.080228  666795 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 21:38:25.090819  666795 out.go:203] 
	W1217 21:38:25.094741  666795 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001439744s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1217 21:38:25.094814  666795 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1217 21:38:25.094836  666795 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1217 21:38:25.098897  666795 out.go:203] 
	
	
	==> CRI-O <==
	Dec 17 21:26:08 kubernetes-upgrade-342357 crio[653]: time="2025-12-17T21:26:08.836750207Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 17 21:26:08 kubernetes-upgrade-342357 crio[653]: time="2025-12-17T21:26:08.836784898Z" level=info msg="Starting seccomp notifier watcher"
	Dec 17 21:26:08 kubernetes-upgrade-342357 crio[653]: time="2025-12-17T21:26:08.836828148Z" level=info msg="Create NRI interface"
	Dec 17 21:26:08 kubernetes-upgrade-342357 crio[653]: time="2025-12-17T21:26:08.837183128Z" level=info msg="built-in NRI default validator is disabled"
	Dec 17 21:26:08 kubernetes-upgrade-342357 crio[653]: time="2025-12-17T21:26:08.837211575Z" level=info msg="runtime interface created"
	Dec 17 21:26:08 kubernetes-upgrade-342357 crio[653]: time="2025-12-17T21:26:08.837227305Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 17 21:26:08 kubernetes-upgrade-342357 crio[653]: time="2025-12-17T21:26:08.837296482Z" level=info msg="runtime interface starting up..."
	Dec 17 21:26:08 kubernetes-upgrade-342357 crio[653]: time="2025-12-17T21:26:08.837307387Z" level=info msg="starting plugins..."
	Dec 17 21:26:08 kubernetes-upgrade-342357 crio[653]: time="2025-12-17T21:26:08.837324569Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 17 21:26:08 kubernetes-upgrade-342357 crio[653]: time="2025-12-17T21:26:08.837397718Z" level=info msg="No systemd watchdog enabled"
	Dec 17 21:26:08 kubernetes-upgrade-342357 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 17 21:30:20 kubernetes-upgrade-342357 crio[653]: time="2025-12-17T21:30:20.565090129Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-rc.1" id=ee2bc897-de2c-4254-9751-6c8fd3c1d615 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 21:30:20 kubernetes-upgrade-342357 crio[653]: time="2025-12-17T21:30:20.574504683Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" id=7fa82cfb-8a85-41fc-9a2a-77a3ac21e67a name=/runtime.v1.ImageService/ImageStatus
	Dec 17 21:30:20 kubernetes-upgrade-342357 crio[653]: time="2025-12-17T21:30:20.575227396Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-rc.1" id=14a06fdf-8611-4725-8bfb-0a5b11855b5a name=/runtime.v1.ImageService/ImageStatus
	Dec 17 21:30:20 kubernetes-upgrade-342357 crio[653]: time="2025-12-17T21:30:20.575899803Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-rc.1" id=2158c4cf-b3c5-42c3-9493-c284677649d3 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 21:30:20 kubernetes-upgrade-342357 crio[653]: time="2025-12-17T21:30:20.576437981Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=13fde99c-285c-4567-9544-3dab84b21ada name=/runtime.v1.ImageService/ImageStatus
	Dec 17 21:30:20 kubernetes-upgrade-342357 crio[653]: time="2025-12-17T21:30:20.576998813Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=c09a0ee3-0fa3-4864-8427-9a3486af0413 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 21:30:20 kubernetes-upgrade-342357 crio[653]: time="2025-12-17T21:30:20.577552154Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.6-0" id=cd11a1df-4115-4970-a185-3ffe3ec630e2 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 21:34:22 kubernetes-upgrade-342357 crio[653]: time="2025-12-17T21:34:22.971771938Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-rc.1" id=747b3b9a-a945-4008-9551-c625d16beea3 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 21:34:22 kubernetes-upgrade-342357 crio[653]: time="2025-12-17T21:34:22.972494816Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" id=25479022-e1bd-42d2-bd4d-0d19538dfdc2 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 21:34:22 kubernetes-upgrade-342357 crio[653]: time="2025-12-17T21:34:22.973180089Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-rc.1" id=1e4e5cde-2aca-4733-87d0-917090cb54b8 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 21:34:22 kubernetes-upgrade-342357 crio[653]: time="2025-12-17T21:34:22.973657032Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-rc.1" id=acced827-9dd8-4af2-9bf7-f811b979ce25 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 21:34:22 kubernetes-upgrade-342357 crio[653]: time="2025-12-17T21:34:22.974118278Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=b6f86102-5d51-4daf-bd98-e0574e98aa26 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 21:34:22 kubernetes-upgrade-342357 crio[653]: time="2025-12-17T21:34:22.97555344Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=ceaab2f0-d88f-4097-bf3b-259564450e49 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 21:34:22 kubernetes-upgrade-342357 crio[653]: time="2025-12-17T21:34:22.976154101Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.6-0" id=9ec81e7d-5f5d-491e-9019-d3ef04a12934 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec17 21:00] overlayfs: idmapped layers are currently not supported
	[Dec17 21:04] overlayfs: idmapped layers are currently not supported
	[  +3.938873] overlayfs: idmapped layers are currently not supported
	[Dec17 21:05] overlayfs: idmapped layers are currently not supported
	[Dec17 21:06] overlayfs: idmapped layers are currently not supported
	[Dec17 21:08] overlayfs: idmapped layers are currently not supported
	[Dec17 21:12] overlayfs: idmapped layers are currently not supported
	[Dec17 21:13] overlayfs: idmapped layers are currently not supported
	[Dec17 21:14] overlayfs: idmapped layers are currently not supported
	[ +43.653071] overlayfs: idmapped layers are currently not supported
	[Dec17 21:15] overlayfs: idmapped layers are currently not supported
	[Dec17 21:16] overlayfs: idmapped layers are currently not supported
	[Dec17 21:17] overlayfs: idmapped layers are currently not supported
	[  +0.555481] overlayfs: idmapped layers are currently not supported
	[Dec17 21:18] overlayfs: idmapped layers are currently not supported
	[ +18.618704] overlayfs: idmapped layers are currently not supported
	[Dec17 21:19] overlayfs: idmapped layers are currently not supported
	[ +26.163757] overlayfs: idmapped layers are currently not supported
	[Dec17 21:20] overlayfs: idmapped layers are currently not supported
	[Dec17 21:21] kauditd_printk_skb: 8 callbacks suppressed
	[ +22.921341] overlayfs: idmapped layers are currently not supported
	[Dec17 21:24] overlayfs: idmapped layers are currently not supported
	[Dec17 21:25] overlayfs: idmapped layers are currently not supported
	[Dec17 21:36] overlayfs: idmapped layers are currently not supported
	[Dec17 21:38] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 21:38:27 up  4:20,  0 user,  load average: 2.27, 1.63, 1.77
	Linux kubernetes-upgrade-342357 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 21:38:24 kubernetes-upgrade-342357 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 21:38:25 kubernetes-upgrade-342357 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 963.
	Dec 17 21:38:25 kubernetes-upgrade-342357 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 21:38:25 kubernetes-upgrade-342357 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 21:38:25 kubernetes-upgrade-342357 kubelet[12402]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 21:38:25 kubernetes-upgrade-342357 kubelet[12402]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 21:38:25 kubernetes-upgrade-342357 kubelet[12402]: E1217 21:38:25.383261   12402 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 21:38:25 kubernetes-upgrade-342357 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 21:38:25 kubernetes-upgrade-342357 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 21:38:26 kubernetes-upgrade-342357 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 964.
	Dec 17 21:38:26 kubernetes-upgrade-342357 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 21:38:26 kubernetes-upgrade-342357 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 21:38:26 kubernetes-upgrade-342357 kubelet[12415]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 21:38:26 kubernetes-upgrade-342357 kubelet[12415]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 21:38:26 kubernetes-upgrade-342357 kubelet[12415]: E1217 21:38:26.234553   12415 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 21:38:26 kubernetes-upgrade-342357 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 21:38:26 kubernetes-upgrade-342357 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 21:38:27 kubernetes-upgrade-342357 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 965.
	Dec 17 21:38:27 kubernetes-upgrade-342357 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 21:38:27 kubernetes-upgrade-342357 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 21:38:27 kubernetes-upgrade-342357 kubelet[12495]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 21:38:27 kubernetes-upgrade-342357 kubelet[12495]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 21:38:27 kubernetes-upgrade-342357 kubelet[12495]: E1217 21:38:27.168645   12495 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 21:38:27 kubernetes-upgrade-342357 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 21:38:27 kubernetes-upgrade-342357 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p kubernetes-upgrade-342357 -n kubernetes-upgrade-342357
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p kubernetes-upgrade-342357 -n kubernetes-upgrade-342357: exit status 2 (585.165006ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "kubernetes-upgrade-342357" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:176: Cleaning up "kubernetes-upgrade-342357" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-342357
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-342357: (2.784993284s)
--- FAIL: TestKubernetesUpgrade (785.71s)
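A note on the failure mode: the kubelet crash loop above ("kubelet is configured to not run on a host using cgroup v1", restart counter past 960) is the same condition the kubeadm preflight warning describes, namely that kubelet v1.35 or newer refuses to start on a cgroup v1 host unless the kubelet configuration option 'FailCgroupV1' is set to 'false'. A minimal diagnostic sketch follows; only the last command is taken verbatim from the log's Suggestion line, and the YAML field casing is an assumption about how the option named in the warning would appear in a KubeletConfiguration file, not something this log shows.

    # Hedged sketch, not part of the test run.
    # 1) Confirm the host really is on cgroup v1 (the kubeadm warning only
    #    applies there): "tmpfs" => cgroup v1, "cgroup2fs" => cgroup v2.
    stat -fc %T /sys/fs/cgroup
    # 2) The warning says kubelet v1.35+ needs 'FailCgroupV1' set to false to
    #    keep running on cgroup v1. As a standalone KubeletConfiguration
    #    fragment that would plausibly read (field casing is an assumption):
    cat <<'EOF' > /tmp/kubelet-cgroupv1-fragment.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    failCgroupV1: false
    EOF
    # 3) minikube's own suggestion, verbatim from the log:
    minikube start --extra-config=kubelet.cgroup-driver=systemd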

                                                
                                    
TestPause/serial/Pause (6.27s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-918446 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p pause-918446 --alsologtostderr -v=5: exit status 80 (1.732154007s)

                                                
                                                
-- stdout --
	* Pausing node pause-918446 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 21:37:52.983691  706998 out.go:360] Setting OutFile to fd 1 ...
	I1217 21:37:52.984516  706998 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 21:37:52.984530  706998 out.go:374] Setting ErrFile to fd 2...
	I1217 21:37:52.984536  706998 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 21:37:52.984801  706998 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-485134/.minikube/bin
	I1217 21:37:52.985076  706998 out.go:368] Setting JSON to false
	I1217 21:37:52.985100  706998 mustload.go:66] Loading cluster: pause-918446
	I1217 21:37:52.985555  706998 config.go:182] Loaded profile config "pause-918446": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 21:37:52.986124  706998 cli_runner.go:164] Run: docker container inspect pause-918446 --format={{.State.Status}}
	I1217 21:37:53.004331  706998 host.go:66] Checking if "pause-918446" exists ...
	I1217 21:37:53.004665  706998 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 21:37:53.064985  706998 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-17 21:37:53.05591493 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 21:37:53.065613  706998 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765846775-22141/minikube-v1.37.0-1765846775-22141-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765846775-22141-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-918446 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1217 21:37:53.069131  706998 out.go:179] * Pausing node pause-918446 ... 
	I1217 21:37:53.074640  706998 host.go:66] Checking if "pause-918446" exists ...
	I1217 21:37:53.074990  706998 ssh_runner.go:195] Run: systemctl --version
	I1217 21:37:53.075033  706998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-918446
	I1217 21:37:53.101577  706998 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/pause-918446/id_rsa Username:docker}
	I1217 21:37:53.198514  706998 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 21:37:53.212321  706998 pause.go:52] kubelet running: true
	I1217 21:37:53.212406  706998 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1217 21:37:53.426968  706998 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1217 21:37:53.427093  706998 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1217 21:37:53.492041  706998 cri.go:89] found id: "715a5b22a17b4ea89d5c5df3718aaa8b543f9ef2d94f5b5ca36d804ac9cbc1e2"
	I1217 21:37:53.492065  706998 cri.go:89] found id: "13b62765e940b707f2109f867ff607b28ee2b00bc649622df13dd82104b94781"
	I1217 21:37:53.492071  706998 cri.go:89] found id: "2a5ce624f1a768e61331b170f438d08748328a6bad10cd0836782475886d0f78"
	I1217 21:37:53.492075  706998 cri.go:89] found id: "730d2f116ff8547a48b3b177ef0205a4285f5b1a2d27d3af70f6c52dce002c86"
	I1217 21:37:53.492078  706998 cri.go:89] found id: "8a445923145e34da32a48263e4db5c4034994c139614d9318bf0059d4f765b78"
	I1217 21:37:53.492082  706998 cri.go:89] found id: "dea8eca7161c61cb894081fc18ba1333ebfce11ca17673be95f8ae09baa586f7"
	I1217 21:37:53.492085  706998 cri.go:89] found id: "1cca7e7499270d5a13c177bfb97573b991962b599df6a6ce2260aed393abbb0d"
	I1217 21:37:53.492088  706998 cri.go:89] found id: "435b580572a8bd0462449f84c02d6482d96d57145cf44d953cdbce897db99730"
	I1217 21:37:53.492091  706998 cri.go:89] found id: "1eb61928f5d77c91d1c42c08faf26efa6f642b7bbeb923ce2dd2d46594c88b3d"
	I1217 21:37:53.492097  706998 cri.go:89] found id: "eecdfd5180ffaad58834ff194e5867c5f6eabf3b2f43f4e5c424692c8376e31c"
	I1217 21:37:53.492101  706998 cri.go:89] found id: "776d553a4c8267d516c1d7cd7f0f211d2fd8a6fbb10a89792e1dab3050e69a60"
	I1217 21:37:53.492104  706998 cri.go:89] found id: "e54d9f4786a1252ba2f1afa2ea94a40c28d6ddec95c5afb31c0369d80e532d7f"
	I1217 21:37:53.492107  706998 cri.go:89] found id: "d2cb624370de704a2f2fb8a42c2d5a2a132c9c0f57353890ebc7ffa8d923605a"
	I1217 21:37:53.492111  706998 cri.go:89] found id: "dde6c95ebd91853deb5bebbd95070104e1b043bdf297eae1114f0db44dd281ab"
	I1217 21:37:53.492114  706998 cri.go:89] found id: ""
	I1217 21:37:53.492167  706998 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 21:37:53.503485  706998 retry.go:31] will retry after 143.397326ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T21:37:53Z" level=error msg="open /run/runc: no such file or directory"
	I1217 21:37:53.648047  706998 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 21:37:53.660744  706998 pause.go:52] kubelet running: false
	I1217 21:37:53.660812  706998 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1217 21:37:53.851718  706998 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1217 21:37:53.851863  706998 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1217 21:37:53.945624  706998 cri.go:89] found id: "715a5b22a17b4ea89d5c5df3718aaa8b543f9ef2d94f5b5ca36d804ac9cbc1e2"
	I1217 21:37:53.945647  706998 cri.go:89] found id: "13b62765e940b707f2109f867ff607b28ee2b00bc649622df13dd82104b94781"
	I1217 21:37:53.945653  706998 cri.go:89] found id: "2a5ce624f1a768e61331b170f438d08748328a6bad10cd0836782475886d0f78"
	I1217 21:37:53.945656  706998 cri.go:89] found id: "730d2f116ff8547a48b3b177ef0205a4285f5b1a2d27d3af70f6c52dce002c86"
	I1217 21:37:53.945660  706998 cri.go:89] found id: "8a445923145e34da32a48263e4db5c4034994c139614d9318bf0059d4f765b78"
	I1217 21:37:53.945663  706998 cri.go:89] found id: "dea8eca7161c61cb894081fc18ba1333ebfce11ca17673be95f8ae09baa586f7"
	I1217 21:37:53.945666  706998 cri.go:89] found id: "1cca7e7499270d5a13c177bfb97573b991962b599df6a6ce2260aed393abbb0d"
	I1217 21:37:53.945670  706998 cri.go:89] found id: "435b580572a8bd0462449f84c02d6482d96d57145cf44d953cdbce897db99730"
	I1217 21:37:53.945673  706998 cri.go:89] found id: "1eb61928f5d77c91d1c42c08faf26efa6f642b7bbeb923ce2dd2d46594c88b3d"
	I1217 21:37:53.945685  706998 cri.go:89] found id: "eecdfd5180ffaad58834ff194e5867c5f6eabf3b2f43f4e5c424692c8376e31c"
	I1217 21:37:53.945689  706998 cri.go:89] found id: "776d553a4c8267d516c1d7cd7f0f211d2fd8a6fbb10a89792e1dab3050e69a60"
	I1217 21:37:53.945692  706998 cri.go:89] found id: "e54d9f4786a1252ba2f1afa2ea94a40c28d6ddec95c5afb31c0369d80e532d7f"
	I1217 21:37:53.945700  706998 cri.go:89] found id: "d2cb624370de704a2f2fb8a42c2d5a2a132c9c0f57353890ebc7ffa8d923605a"
	I1217 21:37:53.945704  706998 cri.go:89] found id: "dde6c95ebd91853deb5bebbd95070104e1b043bdf297eae1114f0db44dd281ab"
	I1217 21:37:53.945707  706998 cri.go:89] found id: ""
	I1217 21:37:53.945758  706998 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 21:37:53.957546  706998 retry.go:31] will retry after 405.531898ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T21:37:53Z" level=error msg="open /run/runc: no such file or directory"
	I1217 21:37:54.364239  706998 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 21:37:54.377361  706998 pause.go:52] kubelet running: false
	I1217 21:37:54.377470  706998 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1217 21:37:54.523195  706998 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1217 21:37:54.523273  706998 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1217 21:37:54.609125  706998 cri.go:89] found id: "715a5b22a17b4ea89d5c5df3718aaa8b543f9ef2d94f5b5ca36d804ac9cbc1e2"
	I1217 21:37:54.609213  706998 cri.go:89] found id: "13b62765e940b707f2109f867ff607b28ee2b00bc649622df13dd82104b94781"
	I1217 21:37:54.609248  706998 cri.go:89] found id: "2a5ce624f1a768e61331b170f438d08748328a6bad10cd0836782475886d0f78"
	I1217 21:37:54.609282  706998 cri.go:89] found id: "730d2f116ff8547a48b3b177ef0205a4285f5b1a2d27d3af70f6c52dce002c86"
	I1217 21:37:54.609315  706998 cri.go:89] found id: "8a445923145e34da32a48263e4db5c4034994c139614d9318bf0059d4f765b78"
	I1217 21:37:54.609374  706998 cri.go:89] found id: "dea8eca7161c61cb894081fc18ba1333ebfce11ca17673be95f8ae09baa586f7"
	I1217 21:37:54.609403  706998 cri.go:89] found id: "1cca7e7499270d5a13c177bfb97573b991962b599df6a6ce2260aed393abbb0d"
	I1217 21:37:54.609443  706998 cri.go:89] found id: "435b580572a8bd0462449f84c02d6482d96d57145cf44d953cdbce897db99730"
	I1217 21:37:54.609477  706998 cri.go:89] found id: "1eb61928f5d77c91d1c42c08faf26efa6f642b7bbeb923ce2dd2d46594c88b3d"
	I1217 21:37:54.609521  706998 cri.go:89] found id: "eecdfd5180ffaad58834ff194e5867c5f6eabf3b2f43f4e5c424692c8376e31c"
	I1217 21:37:54.609557  706998 cri.go:89] found id: "776d553a4c8267d516c1d7cd7f0f211d2fd8a6fbb10a89792e1dab3050e69a60"
	I1217 21:37:54.609587  706998 cri.go:89] found id: "e54d9f4786a1252ba2f1afa2ea94a40c28d6ddec95c5afb31c0369d80e532d7f"
	I1217 21:37:54.609608  706998 cri.go:89] found id: "d2cb624370de704a2f2fb8a42c2d5a2a132c9c0f57353890ebc7ffa8d923605a"
	I1217 21:37:54.609650  706998 cri.go:89] found id: "dde6c95ebd91853deb5bebbd95070104e1b043bdf297eae1114f0db44dd281ab"
	I1217 21:37:54.609673  706998 cri.go:89] found id: ""
	I1217 21:37:54.609809  706998 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 21:37:54.629344  706998 out.go:203] 
	W1217 21:37:54.632474  706998 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T21:37:54Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T21:37:54Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 21:37:54.632698  706998 out.go:285] * 
	* 
	W1217 21:37:54.659369  706998 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 21:37:54.662282  706998 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-arm64 pause -p pause-918446 --alsologtostderr -v=5" : exit status 80
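Context for the exit status 80: every retry above dies on the same probe. minikube enumerates running containers with `sudo runc list -f json`, and on this node runc's state directory is missing, so each attempt fails identically with "open /run/runc: no such file or directory". A hedged way to re-run that probe by hand under the docker driver (the container name is taken from the docker inspect output that follows; the /run/crun path is a guess at an alternative runtime state directory, not something this log mentions):

    # Re-run minikube's failing probe inside the profile container:
    docker exec pause-918446 sudo runc list -f json
    # Check whether the runc state directory exists at all; /run/crun is a
    # speculative alternative location, included only for comparison:
    docker exec pause-918446 ls -d /run/runc /run/crun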
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect pause-918446
helpers_test.go:244: (dbg) docker inspect pause-918446:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "766e98a0de82b6ee95015f80f40b90eccc0fc14978602887b22236fb87279ff4",
	        "Created": "2025-12-17T21:36:38.459481929Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 703251,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T21:36:38.515561948Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/766e98a0de82b6ee95015f80f40b90eccc0fc14978602887b22236fb87279ff4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/766e98a0de82b6ee95015f80f40b90eccc0fc14978602887b22236fb87279ff4/hostname",
	        "HostsPath": "/var/lib/docker/containers/766e98a0de82b6ee95015f80f40b90eccc0fc14978602887b22236fb87279ff4/hosts",
	        "LogPath": "/var/lib/docker/containers/766e98a0de82b6ee95015f80f40b90eccc0fc14978602887b22236fb87279ff4/766e98a0de82b6ee95015f80f40b90eccc0fc14978602887b22236fb87279ff4-json.log",
	        "Name": "/pause-918446",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-918446:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-918446",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "766e98a0de82b6ee95015f80f40b90eccc0fc14978602887b22236fb87279ff4",
	                "LowerDir": "/var/lib/docker/overlay2/21d0801c712512a71a24af57a9cd384b37544cc71bd3d0a0af70614ea3ee6eb9-init/diff:/var/lib/docker/overlay2/c4759b344ecb109c83d66e4ef56c76903f1d1e597efb86ab2d2c7911b1130a8f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/21d0801c712512a71a24af57a9cd384b37544cc71bd3d0a0af70614ea3ee6eb9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/21d0801c712512a71a24af57a9cd384b37544cc71bd3d0a0af70614ea3ee6eb9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/21d0801c712512a71a24af57a9cd384b37544cc71bd3d0a0af70614ea3ee6eb9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-918446",
	                "Source": "/var/lib/docker/volumes/pause-918446/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-918446",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-918446",
	                "name.minikube.sigs.k8s.io": "pause-918446",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5c8bc8db657feaef3c1118ad893bc11b4ae842bc52df16353fc04d5dc0d8dc83",
	            "SandboxKey": "/var/run/docker/netns/5c8bc8db657f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33423"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33424"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33427"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33425"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33426"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-918446": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ea:8e:0b:f4:ad:d9",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0f7dc9fae7b8c69416a0b9cbb4dd403c9e2ca80554239262bec9161eb4c54a52",
	                    "EndpointID": "4653020f5eac087a210f723e63a3efecacb6a6258a70ad13d496c39e757e67cd",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-918446",
	                        "766e98a0de82"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
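
The port bindings recorded under NetworkSettings in the inspect dump above are what the harness uses to reach the node from the host. For reference, a single mapping can be read with a Go template instead of dumping the whole document; a minimal sketch against the profile shown above (this is the same template minikube itself uses later in this log):

	docker container inspect pause-918446 --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'
	# prints 33423, the HostPort bound to 22/tcp in the dump above
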
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-918446 -n pause-918446
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-918446 -n pause-918446: exit status 2 (368.592101ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
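
A non-zero exit from minikube status reflects component state rather than a command failure, which is why the harness notes it "may be ok" for a deliberately paused cluster. The per-component breakdown can be requested directly; a sketch, assuming the profile still exists:

	out/minikube-linux-arm64 status -p pause-918446 --output json
	# --output json prints each component's state (Host, Kubelet, APIServer, Kubeconfig);
	# here Host stays Running while the paused control-plane components do not
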
helpers_test.go:253: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p pause-918446 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p pause-918446 logs -n 25: (1.399708369s)
helpers_test.go:261: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                     ARGS                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-185508 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                         │ NoKubernetes-185508       │ jenkins │ v1.37.0 │ 17 Dec 25 21:23 UTC │ 17 Dec 25 21:24 UTC │
	│ start   │ -p missing-upgrade-783783 --memory=3072 --driver=docker  --container-runtime=crio                                                             │ missing-upgrade-783783    │ jenkins │ v1.35.0 │ 17 Dec 25 21:23 UTC │ 17 Dec 25 21:24 UTC │
	│ start   │ -p NoKubernetes-185508 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                         │ NoKubernetes-185508       │ jenkins │ v1.37.0 │ 17 Dec 25 21:24 UTC │ 17 Dec 25 21:26 UTC │
	│ start   │ -p missing-upgrade-783783 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                      │ missing-upgrade-783783    │ jenkins │ v1.37.0 │ 17 Dec 25 21:24 UTC │ 17 Dec 25 21:25 UTC │
	│ delete  │ -p missing-upgrade-783783                                                                                                                     │ missing-upgrade-783783    │ jenkins │ v1.37.0 │ 17 Dec 25 21:25 UTC │ 17 Dec 25 21:25 UTC │
	│ start   │ -p kubernetes-upgrade-342357 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio      │ kubernetes-upgrade-342357 │ jenkins │ v1.37.0 │ 17 Dec 25 21:25 UTC │ 17 Dec 25 21:25 UTC │
	│ stop    │ -p kubernetes-upgrade-342357                                                                                                                  │ kubernetes-upgrade-342357 │ jenkins │ v1.37.0 │ 17 Dec 25 21:25 UTC │ 17 Dec 25 21:26 UTC │
	│ start   │ -p kubernetes-upgrade-342357 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-342357 │ jenkins │ v1.37.0 │ 17 Dec 25 21:26 UTC │                     │
	│ delete  │ -p NoKubernetes-185508                                                                                                                        │ NoKubernetes-185508       │ jenkins │ v1.37.0 │ 17 Dec 25 21:26 UTC │ 17 Dec 25 21:26 UTC │
	│ start   │ -p NoKubernetes-185508 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                         │ NoKubernetes-185508       │ jenkins │ v1.37.0 │ 17 Dec 25 21:26 UTC │ 17 Dec 25 21:26 UTC │
	│ ssh     │ -p NoKubernetes-185508 sudo systemctl is-active --quiet service kubelet                                                                       │ NoKubernetes-185508       │ jenkins │ v1.37.0 │ 17 Dec 25 21:26 UTC │                     │
	│ stop    │ -p NoKubernetes-185508                                                                                                                        │ NoKubernetes-185508       │ jenkins │ v1.37.0 │ 17 Dec 25 21:26 UTC │ 17 Dec 25 21:26 UTC │
	│ start   │ -p NoKubernetes-185508 --driver=docker  --container-runtime=crio                                                                              │ NoKubernetes-185508       │ jenkins │ v1.37.0 │ 17 Dec 25 21:26 UTC │ 17 Dec 25 21:26 UTC │
	│ ssh     │ -p NoKubernetes-185508 sudo systemctl is-active --quiet service kubelet                                                                       │ NoKubernetes-185508       │ jenkins │ v1.37.0 │ 17 Dec 25 21:26 UTC │                     │
	│ delete  │ -p NoKubernetes-185508                                                                                                                        │ NoKubernetes-185508       │ jenkins │ v1.37.0 │ 17 Dec 25 21:26 UTC │ 17 Dec 25 21:26 UTC │
	│ start   │ -p stopped-upgrade-993252 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                          │ stopped-upgrade-993252    │ jenkins │ v1.35.0 │ 17 Dec 25 21:26 UTC │ 17 Dec 25 21:27 UTC │
	│ stop    │ stopped-upgrade-993252 stop                                                                                                                   │ stopped-upgrade-993252    │ jenkins │ v1.35.0 │ 17 Dec 25 21:27 UTC │ 17 Dec 25 21:27 UTC │
	│ start   │ -p stopped-upgrade-993252 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                      │ stopped-upgrade-993252    │ jenkins │ v1.37.0 │ 17 Dec 25 21:27 UTC │ 17 Dec 25 21:31 UTC │
	│ delete  │ -p stopped-upgrade-993252                                                                                                                     │ stopped-upgrade-993252    │ jenkins │ v1.37.0 │ 17 Dec 25 21:31 UTC │ 17 Dec 25 21:31 UTC │
	│ start   │ -p running-upgrade-206976 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                          │ running-upgrade-206976    │ jenkins │ v1.35.0 │ 17 Dec 25 21:31 UTC │ 17 Dec 25 21:32 UTC │
	│ start   │ -p running-upgrade-206976 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                      │ running-upgrade-206976    │ jenkins │ v1.37.0 │ 17 Dec 25 21:32 UTC │ 17 Dec 25 21:36 UTC │
	│ delete  │ -p running-upgrade-206976                                                                                                                     │ running-upgrade-206976    │ jenkins │ v1.37.0 │ 17 Dec 25 21:36 UTC │ 17 Dec 25 21:36 UTC │
	│ start   │ -p pause-918446 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                     │ pause-918446              │ jenkins │ v1.37.0 │ 17 Dec 25 21:36 UTC │ 17 Dec 25 21:37 UTC │
	│ start   │ -p pause-918446 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                              │ pause-918446              │ jenkins │ v1.37.0 │ 17 Dec 25 21:37 UTC │ 17 Dec 25 21:37 UTC │
	│ pause   │ -p pause-918446 --alsologtostderr -v=5                                                                                                        │ pause-918446              │ jenkins │ v1.37.0 │ 17 Dec 25 21:37 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
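	# The Audit table above is embedded in every "minikube logs" dump; if only the
	# command history is needed, "minikube logs" also accepts an --audit flag
	# (a sketch):
	#   out/minikube-linux-arm64 logs -p pause-918446 --audit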
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 21:37:26
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 21:37:26.074742  705673 out.go:360] Setting OutFile to fd 1 ...
	I1217 21:37:26.075277  705673 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 21:37:26.075311  705673 out.go:374] Setting ErrFile to fd 2...
	I1217 21:37:26.075331  705673 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 21:37:26.075883  705673 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-485134/.minikube/bin
	I1217 21:37:26.076415  705673 out.go:368] Setting JSON to false
	I1217 21:37:26.077503  705673 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":15595,"bootTime":1765991851,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1217 21:37:26.077626  705673 start.go:143] virtualization:  
	I1217 21:37:26.080948  705673 out.go:179] * [pause-918446] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 21:37:26.084193  705673 out.go:179]   - MINIKUBE_LOCATION=21808
	I1217 21:37:26.084403  705673 notify.go:221] Checking for updates...
	I1217 21:37:26.091708  705673 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 21:37:26.094792  705673 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-485134/kubeconfig
	I1217 21:37:26.100332  705673 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-485134/.minikube
	I1217 21:37:26.103435  705673 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 21:37:26.106289  705673 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 21:37:26.109961  705673 config.go:182] Loaded profile config "pause-918446": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 21:37:26.110618  705673 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 21:37:26.147818  705673 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 21:37:26.147985  705673 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 21:37:26.209684  705673 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-17 21:37:26.200553189 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 21:37:26.209794  705673 docker.go:319] overlay module found
	I1217 21:37:26.212982  705673 out.go:179] * Using the docker driver based on existing profile
	I1217 21:37:26.215900  705673 start.go:309] selected driver: docker
	I1217 21:37:26.215920  705673 start.go:927] validating driver "docker" against &{Name:pause-918446 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:pause-918446 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 21:37:26.216054  705673 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 21:37:26.216160  705673 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 21:37:26.278118  705673 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-17 21:37:26.268834521 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 21:37:26.278562  705673 cni.go:84] Creating CNI manager for ""
	I1217 21:37:26.278615  705673 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 21:37:26.278671  705673 start.go:353] cluster config:
	{Name:pause-918446 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:pause-918446 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 21:37:26.283739  705673 out.go:179] * Starting "pause-918446" primary control-plane node in "pause-918446" cluster
	I1217 21:37:26.286625  705673 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 21:37:26.289711  705673 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 21:37:26.292702  705673 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 21:37:26.292773  705673 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21808-485134/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4
	I1217 21:37:26.292788  705673 cache.go:65] Caching tarball of preloaded images
	I1217 21:37:26.292820  705673 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 21:37:26.292872  705673 preload.go:238] Found /home/jenkins/minikube-integration/21808-485134/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1217 21:37:26.292882  705673 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1217 21:37:26.293022  705673 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/pause-918446/config.json ...
	I1217 21:37:26.312986  705673 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 21:37:26.313009  705673 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 21:37:26.313028  705673 cache.go:243] Successfully downloaded all kic artifacts
	I1217 21:37:26.313058  705673 start.go:360] acquireMachinesLock for pause-918446: {Name:mk31914dae1555bb906adecd01310ccb2e7c2ac9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 21:37:26.313124  705673 start.go:364] duration metric: took 43.438µs to acquireMachinesLock for "pause-918446"
	I1217 21:37:26.313146  705673 start.go:96] Skipping create...Using existing machine configuration
	I1217 21:37:26.313157  705673 fix.go:54] fixHost starting: 
	I1217 21:37:26.313418  705673 cli_runner.go:164] Run: docker container inspect pause-918446 --format={{.State.Status}}
	I1217 21:37:26.330121  705673 fix.go:112] recreateIfNeeded on pause-918446: state=Running err=<nil>
	W1217 21:37:26.330155  705673 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 21:37:26.333397  705673 out.go:252] * Updating the running docker "pause-918446" container ...
	I1217 21:37:26.333431  705673 machine.go:94] provisionDockerMachine start ...
	I1217 21:37:26.333533  705673 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-918446
	I1217 21:37:26.351762  705673 main.go:143] libmachine: Using SSH client type: native
	I1217 21:37:26.352088  705673 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1217 21:37:26.352103  705673 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 21:37:26.483028  705673 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-918446
	
	I1217 21:37:26.483055  705673 ubuntu.go:182] provisioning hostname "pause-918446"
	I1217 21:37:26.483129  705673 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-918446
	I1217 21:37:26.501031  705673 main.go:143] libmachine: Using SSH client type: native
	I1217 21:37:26.501353  705673 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1217 21:37:26.501377  705673 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-918446 && echo "pause-918446" | sudo tee /etc/hostname
	I1217 21:37:26.640711  705673 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-918446
	
	I1217 21:37:26.640793  705673 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-918446
	I1217 21:37:26.660356  705673 main.go:143] libmachine: Using SSH client type: native
	I1217 21:37:26.660671  705673 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1217 21:37:26.660693  705673 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-918446' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-918446/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-918446' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 21:37:26.796133  705673 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 21:37:26.796164  705673 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21808-485134/.minikube CaCertPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21808-485134/.minikube}
	I1217 21:37:26.796188  705673 ubuntu.go:190] setting up certificates
	I1217 21:37:26.796196  705673 provision.go:84] configureAuth start
	I1217 21:37:26.796261  705673 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-918446
	I1217 21:37:26.814447  705673 provision.go:143] copyHostCerts
	I1217 21:37:26.814522  705673 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem, removing ...
	I1217 21:37:26.814531  705673 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem
	I1217 21:37:26.814607  705673 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem (1082 bytes)
	I1217 21:37:26.814714  705673 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem, removing ...
	I1217 21:37:26.814720  705673 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem
	I1217 21:37:26.814744  705673 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem (1123 bytes)
	I1217 21:37:26.814793  705673 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem, removing ...
	I1217 21:37:26.814798  705673 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem
	I1217 21:37:26.814819  705673 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem (1675 bytes)
	I1217 21:37:26.814863  705673 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem org=jenkins.pause-918446 san=[127.0.0.1 192.168.76.2 localhost minikube pause-918446]
	I1217 21:37:26.920588  705673 provision.go:177] copyRemoteCerts
	I1217 21:37:26.920658  705673 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 21:37:26.920705  705673 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-918446
	I1217 21:37:26.938513  705673 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/pause-918446/id_rsa Username:docker}
	I1217 21:37:27.035588  705673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 21:37:27.052851  705673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1217 21:37:27.070854  705673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 21:37:27.088283  705673 provision.go:87] duration metric: took 292.064896ms to configureAuth
	I1217 21:37:27.088312  705673 ubuntu.go:206] setting minikube options for container-runtime
	I1217 21:37:27.088583  705673 config.go:182] Loaded profile config "pause-918446": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 21:37:27.088697  705673 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-918446
	I1217 21:37:27.106211  705673 main.go:143] libmachine: Using SSH client type: native
	I1217 21:37:27.106528  705673 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1217 21:37:27.106549  705673 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 21:37:32.460162  705673 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 21:37:32.460184  705673 machine.go:97] duration metric: took 6.126745744s to provisionDockerMachine
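	# provisionDockerMachine above works over plain SSH on the mapped host port;
	# the drop-in it just wrote can be verified by hand (a sketch, assuming
	# docker exec access to the kic container):
	#   docker exec pause-918446 cat /etc/sysconfig/crio.minikube
	#   docker exec pause-918446 systemctl is-active crio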
	I1217 21:37:32.460194  705673 start.go:293] postStartSetup for "pause-918446" (driver="docker")
	I1217 21:37:32.460205  705673 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 21:37:32.460263  705673 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 21:37:32.460299  705673 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-918446
	I1217 21:37:32.478458  705673 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/pause-918446/id_rsa Username:docker}
	I1217 21:37:32.575727  705673 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 21:37:32.579073  705673 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 21:37:32.579109  705673 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 21:37:32.579121  705673 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-485134/.minikube/addons for local assets ...
	I1217 21:37:32.579175  705673 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-485134/.minikube/files for local assets ...
	I1217 21:37:32.579261  705673 filesync.go:149] local asset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem -> 4884122.pem in /etc/ssl/certs
	I1217 21:37:32.579362  705673 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 21:37:32.586847  705673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem --> /etc/ssl/certs/4884122.pem (1708 bytes)
	I1217 21:37:32.604910  705673 start.go:296] duration metric: took 144.700759ms for postStartSetup
	I1217 21:37:32.605003  705673 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 21:37:32.605057  705673 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-918446
	I1217 21:37:32.622648  705673 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/pause-918446/id_rsa Username:docker}
	I1217 21:37:32.716928  705673 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 21:37:32.721761  705673 fix.go:56] duration metric: took 6.408597365s for fixHost
	I1217 21:37:32.721794  705673 start.go:83] releasing machines lock for "pause-918446", held for 6.408653021s
	I1217 21:37:32.721871  705673 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-918446
	I1217 21:37:32.738219  705673 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem (1338 bytes)
	W1217 21:37:32.738276  705673 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412_empty.pem, impossibly tiny 0 bytes
	I1217 21:37:32.738285  705673 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 21:37:32.738315  705673 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem (1082 bytes)
	I1217 21:37:32.738397  705673 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem (1123 bytes)
	I1217 21:37:32.738430  705673 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem (1675 bytes)
	I1217 21:37:32.738480  705673 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem (1708 bytes)
	I1217 21:37:32.738551  705673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem --> /usr/share/ca-certificates/4884122.pem (1708 bytes)
	I1217 21:37:32.738604  705673 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-918446
	I1217 21:37:32.755283  705673 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/pause-918446/id_rsa Username:docker}
	I1217 21:37:32.862682  705673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 21:37:32.880415  705673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem --> /usr/share/ca-certificates/488412.pem (1338 bytes)
	I1217 21:37:32.898787  705673 ssh_runner.go:195] Run: openssl version
	I1217 21:37:32.905580  705673 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 21:37:32.913224  705673 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 21:37:32.920775  705673 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 21:37:32.924976  705673 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I1217 21:37:32.925064  705673 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 21:37:32.968258  705673 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 21:37:32.976064  705673 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/488412.pem
	I1217 21:37:32.983385  705673 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/488412.pem /etc/ssl/certs/488412.pem
	I1217 21:37:32.990733  705673 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/488412.pem
	I1217 21:37:32.994684  705673 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 20:21 /usr/share/ca-certificates/488412.pem
	I1217 21:37:32.994797  705673 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/488412.pem
	I1217 21:37:33.041049  705673 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 21:37:33.048954  705673 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4884122.pem
	I1217 21:37:33.056188  705673 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4884122.pem /etc/ssl/certs/4884122.pem
	I1217 21:37:33.064060  705673 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4884122.pem
	I1217 21:37:33.067809  705673 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 20:21 /usr/share/ca-certificates/4884122.pem
	I1217 21:37:33.067881  705673 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4884122.pem
	I1217 21:37:33.109462  705673 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 21:37:33.117034  705673 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-certificates >/dev/null 2>&1 && sudo update-ca-certificates || true"
	I1217 21:37:33.120804  705673 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-trust >/dev/null 2>&1 && sudo update-ca-trust extract || true"
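	# The /etc/ssl/certs/<hash>.0 links tested above follow OpenSSL's
	# subject-hash naming convention; the hash can be reproduced directly
	# (a sketch):
	#   openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	#   # -> b5213941, matching the /etc/ssl/certs/b5213941.0 link checked earlier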
	I1217 21:37:33.124612  705673 ssh_runner.go:195] Run: cat /version.json
	I1217 21:37:33.124725  705673 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 21:37:33.215080  705673 ssh_runner.go:195] Run: systemctl --version
	I1217 21:37:33.221700  705673 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 21:37:33.266897  705673 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 21:37:33.271327  705673 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 21:37:33.271466  705673 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 21:37:33.280665  705673 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 21:37:33.280694  705673 start.go:496] detecting cgroup driver to use...
	I1217 21:37:33.280738  705673 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 21:37:33.280817  705673 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 21:37:33.296453  705673 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 21:37:33.309895  705673 docker.go:218] disabling cri-docker service (if available) ...
	I1217 21:37:33.310011  705673 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 21:37:33.325353  705673 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 21:37:33.338446  705673 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 21:37:33.478397  705673 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 21:37:33.623449  705673 docker.go:234] disabling docker service ...
	I1217 21:37:33.623559  705673 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 21:37:33.639648  705673 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 21:37:33.652744  705673 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 21:37:33.790771  705673 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 21:37:33.926157  705673 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 21:37:33.939492  705673 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 21:37:33.954734  705673 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 21:37:33.954802  705673 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 21:37:33.964683  705673 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1217 21:37:33.964746  705673 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 21:37:33.973796  705673 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 21:37:33.982965  705673 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 21:37:33.991843  705673 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 21:37:34.001986  705673 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 21:37:34.012941  705673 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 21:37:34.022348  705673 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 21:37:34.032191  705673 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 21:37:34.040857  705673 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 21:37:34.049084  705673 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 21:37:34.180369  705673 ssh_runner.go:195] Run: sudo systemctl restart crio
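	# Net effect of the sed edits above on /etc/crio/crio.conf.d/02-crio.conf,
	# as an illustrative sketch (the section headers are assumed; only the keys
	# are guaranteed by the commands in this log):
	#   [crio.image]
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   [crio.runtime]
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   default_sysctls = [
	#     "net.ipv4.ip_unprivileged_port_start=0",
	#   ]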
	I1217 21:37:34.965607  705673 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 21:37:34.965728  705673 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 21:37:34.970148  705673 start.go:564] Will wait 60s for crictl version
	I1217 21:37:34.970261  705673 ssh_runner.go:195] Run: which crictl
	I1217 21:37:34.975549  705673 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 21:37:35.018685  705673 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 21:37:35.018837  705673 ssh_runner.go:195] Run: crio --version
	I1217 21:37:35.079723  705673 ssh_runner.go:195] Run: crio --version
	I1217 21:37:35.169373  705673 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	I1217 21:37:35.173460  705673 cli_runner.go:164] Run: docker network inspect pause-918446 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 21:37:35.190806  705673 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1217 21:37:35.199093  705673 kubeadm.go:884] updating cluster {Name:pause-918446 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:pause-918446 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 21:37:35.199235  705673 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 21:37:35.199284  705673 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 21:37:35.261122  705673 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 21:37:35.261141  705673 crio.go:433] Images already preloaded, skipping extraction
	I1217 21:37:35.261198  705673 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 21:37:35.321080  705673 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 21:37:35.321100  705673 cache_images.go:86] Images are preloaded, skipping loading
	I1217 21:37:35.321107  705673 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.3 crio true true} ...
	I1217 21:37:35.321214  705673 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-918446 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:pause-918446 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
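	# The unit above clears the packaged command with the empty "ExecStart="
	# line before setting minikube's own. The effective unit plus drop-ins can
	# be inspected with systemd itself (a sketch):
	#   docker exec pause-918446 systemctl cat kubelet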
	I1217 21:37:35.321296  705673 ssh_runner.go:195] Run: crio config
	I1217 21:37:35.444147  705673 cni.go:84] Creating CNI manager for ""
	I1217 21:37:35.444219  705673 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 21:37:35.444246  705673 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 21:37:35.444299  705673 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-918446 NodeName:pause-918446 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 21:37:35.444477  705673 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-918446"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 21:37:35.444591  705673 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1217 21:37:35.459662  705673 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 21:37:35.459796  705673 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 21:37:35.468446  705673 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1217 21:37:35.489674  705673 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 21:37:35.508359  705673 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
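	The 2209-byte file written here is the kubeadm manifest rendered above. As a hedged sketch (assuming the binary path this run uses, and a kubeadm new enough to ship `kubeadm config validate`), the file can be sanity-checked from the host before the restart:
	
	  $ minikube -p pause-918446 ssh -- sudo /var/lib/minikube/binaries/v1.34.3/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
	
	A zero exit status means every document in the file parses against its declared apiVersion; how strictly the kubelet and kube-proxy component configs are checked varies by kubeadm version.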
	I1217 21:37:35.527943  705673 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1217 21:37:35.534486  705673 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 21:37:35.764213  705673 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 21:37:35.781868  705673 certs.go:69] Setting up /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/pause-918446 for IP: 192.168.76.2
	I1217 21:37:35.781887  705673 certs.go:195] generating shared ca certs ...
	I1217 21:37:35.781903  705673 certs.go:227] acquiring lock for ca certs: {Name:mk2b1bd9fa0b029b02dd38a7b1b08717d4426eca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 21:37:35.782052  705673 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.key
	I1217 21:37:35.782106  705673 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.key
	I1217 21:37:35.782113  705673 certs.go:257] generating profile certs ...
	I1217 21:37:35.782201  705673 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/pause-918446/client.key
	I1217 21:37:35.782271  705673 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/pause-918446/apiserver.key.3381b907
	I1217 21:37:35.782312  705673 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/pause-918446/proxy-client.key
	I1217 21:37:35.782431  705673 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem (1338 bytes)
	W1217 21:37:35.782465  705673 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412_empty.pem, impossibly tiny 0 bytes
	I1217 21:37:35.782474  705673 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 21:37:35.782503  705673 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem (1082 bytes)
	I1217 21:37:35.782525  705673 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem (1123 bytes)
	I1217 21:37:35.782547  705673 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem (1675 bytes)
	I1217 21:37:35.782591  705673 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem (1708 bytes)
	I1217 21:37:35.783201  705673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 21:37:35.815831  705673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 21:37:35.890151  705673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 21:37:35.935646  705673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1217 21:37:35.967614  705673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/pause-918446/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1217 21:37:35.995317  705673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/pause-918446/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 21:37:36.028886  705673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/pause-918446/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 21:37:36.057940  705673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/pause-918446/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1217 21:37:36.106214  705673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem --> /usr/share/ca-certificates/488412.pem (1338 bytes)
	I1217 21:37:36.152026  705673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem --> /usr/share/ca-certificates/4884122.pem (1708 bytes)
	I1217 21:37:36.191210  705673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 21:37:36.220838  705673 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 21:37:36.245208  705673 ssh_runner.go:195] Run: openssl version
	I1217 21:37:36.253382  705673 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 21:37:36.264747  705673 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 21:37:36.276949  705673 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 21:37:36.280764  705673 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I1217 21:37:36.280827  705673 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 21:37:36.352971  705673 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 21:37:36.365321  705673 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/488412.pem
	I1217 21:37:36.377885  705673 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/488412.pem /etc/ssl/certs/488412.pem
	I1217 21:37:36.390489  705673 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/488412.pem
	I1217 21:37:36.394887  705673 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 20:21 /usr/share/ca-certificates/488412.pem
	I1217 21:37:36.395005  705673 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/488412.pem
	I1217 21:37:36.441075  705673 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 21:37:36.449659  705673 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4884122.pem
	I1217 21:37:36.457505  705673 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4884122.pem /etc/ssl/certs/4884122.pem
	I1217 21:37:36.465736  705673 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4884122.pem
	I1217 21:37:36.469932  705673 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 20:21 /usr/share/ca-certificates/4884122.pem
	I1217 21:37:36.470053  705673 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4884122.pem
	I1217 21:37:36.518142  705673 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 21:37:36.526780  705673 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 21:37:36.531440  705673 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 21:37:36.581700  705673 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 21:37:36.649082  705673 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 21:37:36.703117  705673 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 21:37:36.749423  705673 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 21:37:36.808266  705673 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
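	Each `-checkend 86400` probe above asks openssl whether the certificate expires within the next 86400 seconds (24 hours): exit status 0 means it stays valid beyond that window, non-zero is what would trigger regeneration. A minimal sketch, assuming the certificate layout shown in this run:
	
	  $ sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	      && echo "valid for at least 24h" || echo "expires within 24h"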
	I1217 21:37:36.868122  705673 kubeadm.go:401] StartCluster: {Name:pause-918446 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:pause-918446 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
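	The StartCluster dump above is the persisted profile config. It does not have to be scraped out of the log; under this run's minikube home it lives in the profile directory as config.json (a hedged sketch, assuming the default file layout):
	
	  $ cat /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/pause-918446/config.json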
	I1217 21:37:36.868320  705673 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 21:37:36.868419  705673 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 21:37:36.910077  705673 cri.go:89] found id: "13b62765e940b707f2109f867ff607b28ee2b00bc649622df13dd82104b94781"
	I1217 21:37:36.910150  705673 cri.go:89] found id: "2a5ce624f1a768e61331b170f438d08748328a6bad10cd0836782475886d0f78"
	I1217 21:37:36.910169  705673 cri.go:89] found id: "730d2f116ff8547a48b3b177ef0205a4285f5b1a2d27d3af70f6c52dce002c86"
	I1217 21:37:36.910186  705673 cri.go:89] found id: "8a445923145e34da32a48263e4db5c4034994c139614d9318bf0059d4f765b78"
	I1217 21:37:36.910220  705673 cri.go:89] found id: "dea8eca7161c61cb894081fc18ba1333ebfce11ca17673be95f8ae09baa586f7"
	I1217 21:37:36.910242  705673 cri.go:89] found id: "1cca7e7499270d5a13c177bfb97573b991962b599df6a6ce2260aed393abbb0d"
	I1217 21:37:36.910261  705673 cri.go:89] found id: "435b580572a8bd0462449f84c02d6482d96d57145cf44d953cdbce897db99730"
	I1217 21:37:36.910280  705673 cri.go:89] found id: "1eb61928f5d77c91d1c42c08faf26efa6f642b7bbeb923ce2dd2d46594c88b3d"
	I1217 21:37:36.910309  705673 cri.go:89] found id: "eecdfd5180ffaad58834ff194e5867c5f6eabf3b2f43f4e5c424692c8376e31c"
	I1217 21:37:36.910336  705673 cri.go:89] found id: "776d553a4c8267d516c1d7cd7f0f211d2fd8a6fbb10a89792e1dab3050e69a60"
	I1217 21:37:36.910356  705673 cri.go:89] found id: "e54d9f4786a1252ba2f1afa2ea94a40c28d6ddec95c5afb31c0369d80e532d7f"
	I1217 21:37:36.910390  705673 cri.go:89] found id: "d2cb624370de704a2f2fb8a42c2d5a2a132c9c0f57353890ebc7ffa8d923605a"
	I1217 21:37:36.910414  705673 cri.go:89] found id: "dde6c95ebd91853deb5bebbd95070104e1b043bdf297eae1114f0db44dd281ab"
	I1217 21:37:36.910433  705673 cri.go:89] found id: ""
	I1217 21:37:36.910517  705673 ssh_runner.go:195] Run: sudo runc list -f json
	W1217 21:37:36.928245  705673 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T21:37:36Z" level=error msg="open /run/runc: no such file or directory"
	I1217 21:37:36.928323  705673 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 21:37:36.940602  705673 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 21:37:36.940671  705673 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 21:37:36.940753  705673 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 21:37:36.952681  705673 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 21:37:36.953453  705673 kubeconfig.go:125] found "pause-918446" server: "https://192.168.76.2:8443"
	I1217 21:37:36.954423  705673 kapi.go:59] client config for pause-918446: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21808-485134/.minikube/profiles/pause-918446/client.crt", KeyFile:"/home/jenkins/minikube-integration/21808-485134/.minikube/profiles/pause-918446/client.key", CAFile:"/home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb51f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 21:37:36.955225  705673 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1217 21:37:36.955346  705673 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1217 21:37:36.955374  705673 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1217 21:37:36.955393  705673 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1217 21:37:36.955434  705673 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1217 21:37:36.955899  705673 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 21:37:36.965512  705673 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1217 21:37:36.965546  705673 kubeadm.go:602] duration metric: took 24.855696ms to restartPrimaryControlPlane
	I1217 21:37:36.965556  705673 kubeadm.go:403] duration metric: took 97.444781ms to StartCluster
	I1217 21:37:36.965571  705673 settings.go:142] acquiring lock: {Name:mk91fac3a58d91b836cd0701183ef7cc1e571672 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 21:37:36.965646  705673 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21808-485134/kubeconfig
	I1217 21:37:36.966512  705673 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-485134/kubeconfig: {Name:mkc7214551fa855d6d4575d627c3ecd54291b351 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 21:37:36.966760  705673 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 21:37:36.966962  705673 config.go:182] Loaded profile config "pause-918446": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 21:37:36.967010  705673 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 21:37:36.970931  705673 out.go:179] * Verifying Kubernetes components...
	I1217 21:37:36.970931  705673 out.go:179] * Enabled addons: 
	I1217 21:37:36.973772  705673 addons.go:530] duration metric: took 6.75825ms for enable addons: enabled=[]
	I1217 21:37:36.973817  705673 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 21:37:37.198585  705673 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 21:37:37.213391  705673 node_ready.go:35] waiting up to 6m0s for node "pause-918446" to be "Ready" ...
	I1217 21:37:40.065155  705673 node_ready.go:49] node "pause-918446" is "Ready"
	I1217 21:37:40.065243  705673 node_ready.go:38] duration metric: took 2.851823357s for node "pause-918446" to be "Ready" ...
	I1217 21:37:40.065273  705673 api_server.go:52] waiting for apiserver process to appear ...
	I1217 21:37:40.065384  705673 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 21:37:40.081131  705673 api_server.go:72] duration metric: took 3.114337649s to wait for apiserver process to appear ...
	I1217 21:37:40.081159  705673 api_server.go:88] waiting for apiserver healthz status ...
	I1217 21:37:40.081180  705673 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 21:37:40.102321  705673 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 21:37:40.102355  705673 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1217 21:37:40.581994  705673 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 21:37:40.594977  705673 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 21:37:40.595008  705673 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1217 21:37:41.081990  705673 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 21:37:41.091477  705673 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 21:37:41.091505  705673 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1217 21:37:41.582156  705673 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 21:37:41.591975  705673 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1217 21:37:41.593124  705673 api_server.go:141] control plane version: v1.34.3
	I1217 21:37:41.593146  705673 api_server.go:131] duration metric: took 1.511979775s to wait for apiserver health ...
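	The poll above hits /healthz roughly every 500ms; until every post-start hook completes the endpoint answers 500 with the per-check listing ([+] passed, [-] failed), then a bare 200 "ok". The same verbose listing can be fetched by hand, assuming the kubeconfig context this run writes:
	
	  $ kubectl --context pause-918446 get --raw '/healthz?verbose'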
	I1217 21:37:41.593155  705673 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 21:37:41.596774  705673 system_pods.go:59] 7 kube-system pods found
	I1217 21:37:41.596802  705673 system_pods.go:61] "coredns-66bc5c9577-jtb8l" [1134b539-53f9-4702-a716-ed4285b2123e] Running
	I1217 21:37:41.596811  705673 system_pods.go:61] "etcd-pause-918446" [da32e969-69c3-4d9a-8e22-1066dc76312d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 21:37:41.596829  705673 system_pods.go:61] "kindnet-7v8zq" [3f504d98-d9d6-494d-8e63-da19b280fbb4] Running
	I1217 21:37:41.596843  705673 system_pods.go:61] "kube-apiserver-pause-918446" [7b6f7d5a-18f9-40c1-b1d1-01e98ca5a8db] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 21:37:41.596851  705673 system_pods.go:61] "kube-controller-manager-pause-918446" [49ba563b-1e62-4e32-9838-11a97e13107f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 21:37:41.596856  705673 system_pods.go:61] "kube-proxy-w6lj6" [c0ee899c-83cb-4b55-a66f-7ddad08cb670] Running
	I1217 21:37:41.596861  705673 system_pods.go:61] "kube-scheduler-pause-918446" [85095cb2-ddfb-463b-be8f-ae8f0e29ab69] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 21:37:41.596867  705673 system_pods.go:74] duration metric: took 3.706369ms to wait for pod list to return data ...
	I1217 21:37:41.596874  705673 default_sa.go:34] waiting for default service account to be created ...
	I1217 21:37:41.599423  705673 default_sa.go:45] found service account: "default"
	I1217 21:37:41.599442  705673 default_sa.go:55] duration metric: took 2.561531ms for default service account to be created ...
	I1217 21:37:41.599450  705673 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 21:37:41.602858  705673 system_pods.go:86] 7 kube-system pods found
	I1217 21:37:41.602926  705673 system_pods.go:89] "coredns-66bc5c9577-jtb8l" [1134b539-53f9-4702-a716-ed4285b2123e] Running
	I1217 21:37:41.602952  705673 system_pods.go:89] "etcd-pause-918446" [da32e969-69c3-4d9a-8e22-1066dc76312d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 21:37:41.602973  705673 system_pods.go:89] "kindnet-7v8zq" [3f504d98-d9d6-494d-8e63-da19b280fbb4] Running
	I1217 21:37:41.603013  705673 system_pods.go:89] "kube-apiserver-pause-918446" [7b6f7d5a-18f9-40c1-b1d1-01e98ca5a8db] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 21:37:41.603042  705673 system_pods.go:89] "kube-controller-manager-pause-918446" [49ba563b-1e62-4e32-9838-11a97e13107f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 21:37:41.603071  705673 system_pods.go:89] "kube-proxy-w6lj6" [c0ee899c-83cb-4b55-a66f-7ddad08cb670] Running
	I1217 21:37:41.603112  705673 system_pods.go:89] "kube-scheduler-pause-918446" [85095cb2-ddfb-463b-be8f-ae8f0e29ab69] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 21:37:41.603140  705673 system_pods.go:126] duration metric: took 3.676559ms to wait for k8s-apps to be running ...
	I1217 21:37:41.603174  705673 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 21:37:41.603269  705673 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 21:37:41.617951  705673 system_svc.go:56] duration metric: took 14.755913ms WaitForService to wait for kubelet
	I1217 21:37:41.618033  705673 kubeadm.go:587] duration metric: took 4.651245186s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 21:37:41.618069  705673 node_conditions.go:102] verifying NodePressure condition ...
	I1217 21:37:41.622060  705673 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1217 21:37:41.622149  705673 node_conditions.go:123] node cpu capacity is 2
	I1217 21:37:41.622177  705673 node_conditions.go:105] duration metric: took 4.090011ms to run NodePressure ...
	I1217 21:37:41.622203  705673 start.go:242] waiting for startup goroutines ...
	I1217 21:37:41.622238  705673 start.go:247] waiting for cluster config update ...
	I1217 21:37:41.622265  705673 start.go:256] writing updated cluster config ...
	I1217 21:37:41.622641  705673 ssh_runner.go:195] Run: rm -f paused
	I1217 21:37:41.626955  705673 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 21:37:41.627786  705673 kapi.go:59] client config for pause-918446: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21808-485134/.minikube/profiles/pause-918446/client.crt", KeyFile:"/home/jenkins/minikube-integration/21808-485134/.minikube/profiles/pause-918446/client.key", CAFile:"/home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb51f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 21:37:41.631276  705673 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-jtb8l" in "kube-system" namespace to be "Ready" or be gone ...
	W1217 21:37:43.636539  705673 pod_ready.go:104] pod "coredns-66bc5c9577-jtb8l" is not "Ready", error: node "pause-918446" hosting pod "coredns-66bc5c9577-jtb8l" is not "Ready" (will retry)
	W1217 21:37:45.636694  705673 pod_ready.go:104] pod "coredns-66bc5c9577-jtb8l" is not "Ready", error: node "pause-918446" hosting pod "coredns-66bc5c9577-jtb8l" is not "Ready" (will retry)
	W1217 21:37:47.637044  705673 pod_ready.go:104] pod "coredns-66bc5c9577-jtb8l" is not "Ready", error: node "pause-918446" hosting pod "coredns-66bc5c9577-jtb8l" is not "Ready" (will retry)
	W1217 21:37:49.637244  705673 pod_ready.go:104] pod "coredns-66bc5c9577-jtb8l" is not "Ready", error: node "pause-918446" hosting pod "coredns-66bc5c9577-jtb8l" is not "Ready" (will retry)
	I1217 21:37:50.636549  705673 pod_ready.go:94] pod "coredns-66bc5c9577-jtb8l" is "Ready"
	I1217 21:37:50.636578  705673 pod_ready.go:86] duration metric: took 9.005234069s for pod "coredns-66bc5c9577-jtb8l" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 21:37:50.639347  705673 pod_ready.go:83] waiting for pod "etcd-pause-918446" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 21:37:50.643776  705673 pod_ready.go:94] pod "etcd-pause-918446" is "Ready"
	I1217 21:37:50.643809  705673 pod_ready.go:86] duration metric: took 4.437386ms for pod "etcd-pause-918446" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 21:37:50.646492  705673 pod_ready.go:83] waiting for pod "kube-apiserver-pause-918446" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 21:37:50.651524  705673 pod_ready.go:94] pod "kube-apiserver-pause-918446" is "Ready"
	I1217 21:37:50.651556  705673 pod_ready.go:86] duration metric: took 5.036719ms for pod "kube-apiserver-pause-918446" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 21:37:50.654254  705673 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-918446" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 21:37:51.659568  705673 pod_ready.go:94] pod "kube-controller-manager-pause-918446" is "Ready"
	I1217 21:37:51.659627  705673 pod_ready.go:86] duration metric: took 1.005347445s for pod "kube-controller-manager-pause-918446" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 21:37:51.834608  705673 pod_ready.go:83] waiting for pod "kube-proxy-w6lj6" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 21:37:52.234688  705673 pod_ready.go:94] pod "kube-proxy-w6lj6" is "Ready"
	I1217 21:37:52.234719  705673 pod_ready.go:86] duration metric: took 400.08369ms for pod "kube-proxy-w6lj6" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 21:37:52.434992  705673 pod_ready.go:83] waiting for pod "kube-scheduler-pause-918446" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 21:37:52.834078  705673 pod_ready.go:94] pod "kube-scheduler-pause-918446" is "Ready"
	I1217 21:37:52.834104  705673 pod_ready.go:86] duration metric: took 399.085513ms for pod "kube-scheduler-pause-918446" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 21:37:52.834116  705673 pod_ready.go:40] duration metric: took 11.207082926s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 21:37:52.888091  705673 start.go:625] kubectl: 1.33.2, cluster: 1.34.3 (minor skew: 1)
	I1217 21:37:52.891233  705673 out.go:179] * Done! kubectl is now configured to use "pause-918446" cluster and "default" namespace by default
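	The final "extra waiting" loop matched every core kube-system pod by the label set listed at 21:37:41.626955 and required each one Ready. A hedged equivalent with plain kubectl, shown here for the CoreDNS label only:
	
	  $ kubectl --context pause-918446 -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=4m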
	
	
	==> CRI-O <==
	Dec 17 21:37:35 pause-918446 crio[2183]: time="2025-12-17T21:37:35.357610021Z" level=info msg="Started container" PID=2305 containerID=2a5ce624f1a768e61331b170f438d08748328a6bad10cd0836782475886d0f78 description=kube-system/kube-scheduler-pause-918446/kube-scheduler id=eda0c2b7-be52-4337-9e24-dd4c2454dd6f name=/runtime.v1.RuntimeService/StartContainer sandboxID=fc567db1c47d4e1fb7dcf06c461d88b990ae3e2b6bfd9af1e10f924817b674e3
	Dec 17 21:37:35 pause-918446 crio[2183]: time="2025-12-17T21:37:35.360481955Z" level=info msg="Created container 13b62765e940b707f2109f867ff607b28ee2b00bc649622df13dd82104b94781: kube-system/etcd-pause-918446/etcd" id=07adadd2-5442-47f0-9592-3445cf6555b2 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 21:37:35 pause-918446 crio[2183]: time="2025-12-17T21:37:35.361296698Z" level=info msg="Starting container: 13b62765e940b707f2109f867ff607b28ee2b00bc649622df13dd82104b94781" id=4b21826a-f2c0-4fbb-ba5c-7a5ba6594268 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 21:37:35 pause-918446 crio[2183]: time="2025-12-17T21:37:35.370509967Z" level=info msg="Started container" PID=2318 containerID=13b62765e940b707f2109f867ff607b28ee2b00bc649622df13dd82104b94781 description=kube-system/etcd-pause-918446/etcd id=4b21826a-f2c0-4fbb-ba5c-7a5ba6594268 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3a34c863f628eb30c38a9b70f525e1180e9949aeb6e7ac06e0e3855a903a26f4
	Dec 17 21:37:40 pause-918446 crio[2183]: time="2025-12-17T21:37:40.785583247Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.12.1" id=a24a9e48-4574-4186-8b68-462d971281c4 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 21:37:40 pause-918446 crio[2183]: time="2025-12-17T21:37:40.789140601Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.12.1" id=6181d630-8f31-431c-a7dd-172199bd81d0 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 21:37:40 pause-918446 crio[2183]: time="2025-12-17T21:37:40.792452284Z" level=info msg="Creating container: kube-system/coredns-66bc5c9577-jtb8l/coredns" id=aa91197a-89a5-45de-bf69-c8eaf95727ed name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 21:37:40 pause-918446 crio[2183]: time="2025-12-17T21:37:40.792644614Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 21:37:40 pause-918446 crio[2183]: time="2025-12-17T21:37:40.80394595Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 21:37:40 pause-918446 crio[2183]: time="2025-12-17T21:37:40.805204282Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 21:37:40 pause-918446 crio[2183]: time="2025-12-17T21:37:40.847072295Z" level=info msg="Created container 715a5b22a17b4ea89d5c5df3718aaa8b543f9ef2d94f5b5ca36d804ac9cbc1e2: kube-system/coredns-66bc5c9577-jtb8l/coredns" id=aa91197a-89a5-45de-bf69-c8eaf95727ed name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 21:37:40 pause-918446 crio[2183]: time="2025-12-17T21:37:40.848789413Z" level=info msg="Starting container: 715a5b22a17b4ea89d5c5df3718aaa8b543f9ef2d94f5b5ca36d804ac9cbc1e2" id=223854ae-ff90-485e-b193-226efe8fe3f0 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 21:37:40 pause-918446 crio[2183]: time="2025-12-17T21:37:40.850939215Z" level=info msg="Started container" PID=2650 containerID=715a5b22a17b4ea89d5c5df3718aaa8b543f9ef2d94f5b5ca36d804ac9cbc1e2 description=kube-system/coredns-66bc5c9577-jtb8l/coredns id=223854ae-ff90-485e-b193-226efe8fe3f0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ad462819e64e34ed679a1204068ecfb5cdbfb8fff6ae03b8c7bd71c5edc4de25
	Dec 17 21:37:45 pause-918446 crio[2183]: time="2025-12-17T21:37:45.536306065Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 17 21:37:45 pause-918446 crio[2183]: time="2025-12-17T21:37:45.539849331Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 17 21:37:45 pause-918446 crio[2183]: time="2025-12-17T21:37:45.539884491Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 17 21:37:45 pause-918446 crio[2183]: time="2025-12-17T21:37:45.539908934Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 17 21:37:45 pause-918446 crio[2183]: time="2025-12-17T21:37:45.543136907Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 17 21:37:45 pause-918446 crio[2183]: time="2025-12-17T21:37:45.54317324Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 17 21:37:45 pause-918446 crio[2183]: time="2025-12-17T21:37:45.543199726Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 17 21:37:45 pause-918446 crio[2183]: time="2025-12-17T21:37:45.546418994Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 17 21:37:45 pause-918446 crio[2183]: time="2025-12-17T21:37:45.546454793Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 17 21:37:45 pause-918446 crio[2183]: time="2025-12-17T21:37:45.546479097Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 17 21:37:45 pause-918446 crio[2183]: time="2025-12-17T21:37:45.549812884Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 17 21:37:45 pause-918446 crio[2183]: time="2025-12-17T21:37:45.549858768Z" level=info msg="Updated default CNI network name to kindnet"
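	The CREATE/WRITE/RENAME sequence above is kindnet replacing its CNI config atomically: it writes 10-kindnet.conflist.temp, then renames it over 10-kindnet.conflist, and CRI-O's CNI watcher re-parses the default network on every event. A hedged check of the final state:
	
	  $ minikube -p pause-918446 ssh -- sudo cat /etc/cni/net.d/10-kindnet.conflist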
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	715a5b22a17b4       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                     14 seconds ago      Running             coredns                   1                   ad462819e64e3       coredns-66bc5c9577-jtb8l               kube-system
	13b62765e940b       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                     20 seconds ago      Running             etcd                      1                   3a34c863f628e       etcd-pause-918446                      kube-system
	2a5ce624f1a76       2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6                                     20 seconds ago      Running             kube-scheduler            1                   fc567db1c47d4       kube-scheduler-pause-918446            kube-system
	730d2f116ff85       cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896                                     20 seconds ago      Running             kube-apiserver            1                   bf6bf1e772ec9       kube-apiserver-pause-918446            kube-system
	8a445923145e3       7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22                                     20 seconds ago      Running             kube-controller-manager   1                   ffd31e7a7fc83       kube-controller-manager-pause-918446   kube-system
	dea8eca7161c6       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13                                     20 seconds ago      Running             kindnet-cni               1                   cf3fe7d1c52fd       kindnet-7v8zq                          kube-system
	1cca7e7499270       4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162                                     20 seconds ago      Running             kube-proxy                1                   2f68041d0ad67       kube-proxy-w6lj6                       kube-system
	435b580572a8b       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                     32 seconds ago      Exited              coredns                   0                   ad462819e64e3       coredns-66bc5c9577-jtb8l               kube-system
	1eb61928f5d77       docker.io/kindest/kindnetd@sha256:f1260f5691195cc9a693dc0b55178aa724d944efd62486a8320f0583272b1fa3   43 seconds ago      Exited              kindnet-cni               0                   cf3fe7d1c52fd       kindnet-7v8zq                          kube-system
	eecdfd5180ffa       4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162                                     45 seconds ago      Exited              kube-proxy                0                   2f68041d0ad67       kube-proxy-w6lj6                       kube-system
	776d553a4c826       cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896                                     57 seconds ago      Exited              kube-apiserver            0                   bf6bf1e772ec9       kube-apiserver-pause-918446            kube-system
	e54d9f4786a12       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                     57 seconds ago      Exited              etcd                      0                   3a34c863f628e       etcd-pause-918446                      kube-system
	d2cb624370de7       7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22                                     57 seconds ago      Exited              kube-controller-manager   0                   ffd31e7a7fc83       kube-controller-manager-pause-918446   kube-system
	dde6c95ebd918       2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6                                     57 seconds ago      Exited              kube-scheduler            0                   fc567db1c47d4       kube-scheduler-pause-918446            kube-system
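	Reading the table: the ATTEMPT 1 rows are the containers recreated by this second start of the profile, and the Exited ATTEMPT 0 rows are their pre-restart instances in the same pod sandboxes (note the matching POD IDs). A hedged way to reproduce the listing:
	
	  $ minikube -p pause-918446 ssh -- sudo crictl ps -a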
	
	
	==> coredns [435b580572a8bd0462449f84c02d6482d96d57145cf44d953cdbce897db99730] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59303 - 57484 "HINFO IN 3280441896193278699.7977930218880289109. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013505613s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [715a5b22a17b4ea89d5c5df3718aaa8b543f9ef2d94f5b5ca36d804ac9cbc1e2] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46918 - 936 "HINFO IN 2769384511658514303.6415519256184776346. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.016222386s
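	Both CoreDNS excerpts come straight from the runtime: the first is the Exited attempt-0 container (note the SIGTERM shutdown when the cluster restarted), the second its Running attempt-1 replacement. A hedged reproduction using the ID prefixes from the table above:
	
	  $ minikube -p pause-918446 ssh -- sudo crictl logs 435b580572a8b   # attempt 0, exited
	  $ minikube -p pause-918446 ssh -- sudo crictl logs 715a5b22a17b4   # attempt 1, running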
	
	
	==> describe nodes <==
	Name:               pause-918446
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-918446
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e7c291d147a7a4c759554efbd6d659a1a65fa869
	                    minikube.k8s.io/name=pause-918446
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T21_37_05_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 21:37:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-918446
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 21:37:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 21:37:50 +0000   Wed, 17 Dec 2025 21:36:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 21:37:50 +0000   Wed, 17 Dec 2025 21:36:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 21:37:50 +0000   Wed, 17 Dec 2025 21:36:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 21:37:50 +0000   Wed, 17 Dec 2025 21:37:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-918446
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 0dc957e113b26e583da13082693ddabc
	  System UUID:                dff8bef5-c1ae-4015-acdc-8ca26dcdb9b8
	  Boot ID:                    7e62835b-3b3c-4293-85c7-fb0aa46e4d00
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-jtb8l                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     45s
	  kube-system                 etcd-pause-918446                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         50s
	  kube-system                 kindnet-7v8zq                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      45s
	  kube-system                 kube-apiserver-pause-918446             250m (12%)    0 (0%)      0 (0%)           0 (0%)         50s
	  kube-system                 kube-controller-manager-pause-918446    200m (10%)    0 (0%)      0 (0%)           0 (0%)         50s
	  kube-system                 kube-proxy-w6lj6                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	  kube-system                 kube-scheduler-pause-918446             100m (5%)     0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 44s                kube-proxy       
	  Normal   Starting                 15s                kube-proxy       
	  Normal   NodeHasSufficientMemory  58s (x8 over 58s)  kubelet          Node pause-918446 status is now: NodeHasSufficientMemory
	  Normal   Starting                 58s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 58s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    58s (x8 over 58s)  kubelet          Node pause-918446 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     58s (x8 over 58s)  kubelet          Node pause-918446 status is now: NodeHasSufficientPID
	  Normal   Starting                 51s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 51s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  50s                kubelet          Node pause-918446 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    50s                kubelet          Node pause-918446 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     50s                kubelet          Node pause-918446 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           46s                node-controller  Node pause-918446 event: Registered Node pause-918446 in Controller
	  Normal   NodeNotReady             20s                kubelet          Node pause-918446 status is now: NodeNotReady
	  Normal   RegisteredNode           12s                node-controller  Node pause-918446 event: Registered Node pause-918446 in Controller
	  Normal   NodeReady                5s (x2 over 32s)   kubelet          Node pause-918446 status is now: NodeReady
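This block is standard kubectl describe node output; to regenerate it (assuming the pause-918446 context is still in your kubeconfig, which minikube names after the profile):

    kubectl --context pause-918446 describe node pause-918446

The doubled Starting/CgroupV1/NodeHasSufficient* events come from the two kubelet starts (58s and 51s ago), and NodeNotReady at 20s followed by NodeReady at 5s is consistent with the pause/restart flow this test exercises.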
	
	
	==> dmesg <==
	[Dec17 20:59] overlayfs: idmapped layers are currently not supported
	[Dec17 21:00] overlayfs: idmapped layers are currently not supported
	[Dec17 21:04] overlayfs: idmapped layers are currently not supported
	[  +3.938873] overlayfs: idmapped layers are currently not supported
	[Dec17 21:05] overlayfs: idmapped layers are currently not supported
	[Dec17 21:06] overlayfs: idmapped layers are currently not supported
	[Dec17 21:08] overlayfs: idmapped layers are currently not supported
	[Dec17 21:12] overlayfs: idmapped layers are currently not supported
	[Dec17 21:13] overlayfs: idmapped layers are currently not supported
	[Dec17 21:14] overlayfs: idmapped layers are currently not supported
	[ +43.653071] overlayfs: idmapped layers are currently not supported
	[Dec17 21:15] overlayfs: idmapped layers are currently not supported
	[Dec17 21:16] overlayfs: idmapped layers are currently not supported
	[Dec17 21:17] overlayfs: idmapped layers are currently not supported
	[  +0.555481] overlayfs: idmapped layers are currently not supported
	[Dec17 21:18] overlayfs: idmapped layers are currently not supported
	[ +18.618704] overlayfs: idmapped layers are currently not supported
	[Dec17 21:19] overlayfs: idmapped layers are currently not supported
	[ +26.163757] overlayfs: idmapped layers are currently not supported
	[Dec17 21:20] overlayfs: idmapped layers are currently not supported
	[Dec17 21:21] kauditd_printk_skb: 8 callbacks suppressed
	[ +22.921341] overlayfs: idmapped layers are currently not supported
	[Dec17 21:24] overlayfs: idmapped layers are currently not supported
	[Dec17 21:25] overlayfs: idmapped layers are currently not supported
	[Dec17 21:36] overlayfs: idmapped layers are currently not supported
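These overlayfs lines are host-wide kernel messages; one appears, as far as I can tell, whenever a container runtime falls back from idmapped overlayfs mounts, which this 5.15 kernel does not support. On a shared CI host they accumulate from every job, so their spread across several hours here reads as background noise rather than a problem with this cluster. To tally them on the host:

    # count the fallback warnings in the kernel ring buffer (util-linux dmesg)
    sudo dmesg --time-format iso | grep -c 'idmapped layers'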
	
	
	==> etcd [13b62765e940b707f2109f867ff607b28ee2b00bc649622df13dd82104b94781] <==
	{"level":"warn","ts":"2025-12-17T21:37:38.395012Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T21:37:38.408031Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T21:37:38.431153Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T21:37:38.452223Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T21:37:38.477492Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T21:37:38.488367Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T21:37:38.521991Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T21:37:38.526495Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T21:37:38.543272Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T21:37:38.557326Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T21:37:38.628459Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T21:37:38.646729Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T21:37:38.669196Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T21:37:38.682228Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T21:37:38.708265Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T21:37:38.732272Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T21:37:38.750113Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T21:37:38.792624Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T21:37:38.794183Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T21:37:38.800611Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T21:37:38.828720Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T21:37:38.861056Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T21:37:38.876852Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42784","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T21:37:38.900363Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T21:37:38.966580Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42810","server-name":"","error":"EOF"}
	
	
	==> etcd [e54d9f4786a1252ba2f1afa2ea94a40c28d6ddec95c5afb31c0369d80e532d7f] <==
	{"level":"warn","ts":"2025-12-17T21:37:01.371405Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T21:37:01.384561Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T21:37:01.408845Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T21:37:01.427178Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T21:37:01.442279Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T21:37:01.464828Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T21:37:01.529270Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59260","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-17T21:37:27.256878Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-17T21:37:27.256954Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-918446","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"error","ts":"2025-12-17T21:37:27.257070Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-17T21:37:27.538348Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-17T21:37:27.538443Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-17T21:37:27.538466Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2025-12-17T21:37:27.538497Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-17T21:37:27.538599Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-12-17T21:37:27.538640Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-17T21:37:27.538662Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-17T21:37:27.538673Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-17T21:37:27.538587Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-17T21:37:27.538687Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-17T21:37:27.538695Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-17T21:37:27.541929Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"error","ts":"2025-12-17T21:37:27.542028Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-17T21:37:27.542114Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-17T21:37:27.542143Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-918446","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
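Two etcd containers appear because the control plane restarted: this e54d… block ends with a clean, signal-driven shutdown at 21:37:27, and the 13b6… block above it is the replacement that came up afterwards. The repeated "rejected connection on client endpoint … error: EOF" warnings are typically TCP probes (apiserver or readiness checks) that connect and close before finishing the TLS handshake; noisy, but not failures. To read a log with that noise stripped while the container still exists (id prefix taken from the section header):

    # crictl mirrors the container's stderr, so merge streams before filtering
    minikube -p pause-918446 ssh -- sudo crictl logs 13b62765e940b 2>&1 | grep -v 'rejected connection'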
	
	
	==> kernel <==
	 21:37:55 up  4:20,  0 user,  load average: 2.14, 1.54, 1.74
	Linux pause-918446 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1eb61928f5d77c91d1c42c08faf26efa6f642b7bbeb923ce2dd2d46594c88b3d] <==
	I1217 21:37:12.513394       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1217 21:37:12.514591       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1217 21:37:12.514771       1 main.go:148] setting mtu 1500 for CNI 
	I1217 21:37:12.514791       1 main.go:178] kindnetd IP family: "ipv4"
	I1217 21:37:12.514805       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-17T21:37:12Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1217 21:37:12.719240       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1217 21:37:12.724031       1 controller.go:381] "Waiting for informer caches to sync"
	I1217 21:37:12.725393       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1217 21:37:12.725554       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1217 21:37:12.913665       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1217 21:37:12.917673       1 metrics.go:72] Registering metrics
	I1217 21:37:12.917844       1 controller.go:711] "Syncing nftables rules"
	I1217 21:37:22.723715       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1217 21:37:22.723787       1 main.go:301] handling current node
	
	
	==> kindnet [dea8eca7161c61cb894081fc18ba1333ebfce11ca17673be95f8ae09baa586f7] <==
	I1217 21:37:35.328461       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1217 21:37:35.328655       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1217 21:37:35.328787       1 main.go:148] setting mtu 1500 for CNI 
	I1217 21:37:35.328799       1 main.go:178] kindnetd IP family: "ipv4"
	I1217 21:37:35.328809       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-17T21:37:35Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1217 21:37:35.535248       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1217 21:37:35.535514       1 controller.go:381] "Waiting for informer caches to sync"
	I1217 21:37:35.535562       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1217 21:37:35.553766       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1217 21:37:35.554744       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1217 21:37:40.159685       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1217 21:37:40.159779       1 metrics.go:72] Registering metrics
	I1217 21:37:40.159867       1 controller.go:711] "Syncing nftables rules"
	I1217 21:37:45.535861       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1217 21:37:45.535944       1 main.go:301] handling current node
	I1217 21:37:55.536022       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1217 21:37:55.536066       1 main.go:301] handling current node
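Same pattern for kindnet: this dea8… instance briefly fails its namespace watch at 21:37:35 ("connection refused") because the restarted apiserver was not yet serving, then syncs at 21:37:40 and resumes handling the node. The "nri plugin exited" line only means NRI is not enabled in this CRI-O build; kindnet carries on without it, which the missing socket confirms:

    # expected to fail with 'No such file or directory' when NRI is disabled
    minikube -p pause-918446 ssh -- ls -l /var/run/nri/nri.sock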
	
	
	==> kube-apiserver [730d2f116ff8547a48b3b177ef0205a4285f5b1a2d27d3af70f6c52dce002c86] <==
	I1217 21:37:40.078251       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1217 21:37:40.099708       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1217 21:37:40.106392       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1217 21:37:40.106471       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1217 21:37:40.118742       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1217 21:37:40.119020       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1217 21:37:40.119151       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1217 21:37:40.119480       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1217 21:37:40.119677       1 aggregator.go:171] initial CRD sync complete...
	I1217 21:37:40.119719       1 autoregister_controller.go:144] Starting autoregister controller
	I1217 21:37:40.119748       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1217 21:37:40.119777       1 cache.go:39] Caches are synced for autoregister controller
	I1217 21:37:40.126544       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1217 21:37:40.131643       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1217 21:37:40.139181       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1217 21:37:40.139647       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1217 21:37:40.151820       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	E1217 21:37:40.158738       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1217 21:37:40.713086       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1217 21:37:42.073599       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1217 21:37:43.685117       1 controller.go:667] quota admission added evaluator for: endpoints
	I1217 21:37:43.736672       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1217 21:37:43.785069       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1217 21:37:43.836360       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1217 21:37:43.939509       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-apiserver [776d553a4c8267d516c1d7cd7f0f211d2fd8a6fbb10a89792e1dab3050e69a60] <==
	W1217 21:37:27.282039       1 logging.go:55] [core] [Channel #227 SubChannel #229]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 21:37:27.282116       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 21:37:27.282219       1 logging.go:55] [core] [Channel #43 SubChannel #45]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 21:37:27.282319       1 logging.go:55] [core] [Channel #63 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 21:37:27.282401       1 logging.go:55] [core] [Channel #115 SubChannel #117]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 21:37:27.282486       1 logging.go:55] [core] [Channel #251 SubChannel #253]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 21:37:27.285747       1 logging.go:55] [core] [Channel #191 SubChannel #193]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 21:37:27.285910       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 21:37:27.286154       1 logging.go:55] [core] [Channel #171 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 21:37:27.286219       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 21:37:27.286277       1 logging.go:55] [core] [Channel #215 SubChannel #217]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 21:37:27.286333       1 logging.go:55] [core] [Channel #111 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 21:37:27.286389       1 logging.go:55] [core] [Channel #75 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 21:37:27.286440       1 logging.go:55] [core] [Channel #83 SubChannel #85]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 21:37:27.286492       1 logging.go:55] [core] [Channel #127 SubChannel #129]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 21:37:27.286548       1 logging.go:55] [core] [Channel #39 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 21:37:27.286602       1 logging.go:55] [core] [Channel #55 SubChannel #57]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 21:37:27.286658       1 logging.go:55] [core] [Channel #87 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 21:37:27.286714       1 logging.go:55] [core] [Channel #143 SubChannel #145]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 21:37:27.286785       1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 21:37:27.286848       1 logging.go:55] [core] [Channel #231 SubChannel #233]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 21:37:27.286911       1 logging.go:55] [core] [Channel #247 SubChannel #249]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 21:37:27.286965       1 logging.go:55] [core] [Channel #79 SubChannel #81]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 21:37:27.287294       1 logging.go:55] [core] [Channel #139 SubChannel #141]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 21:37:27.287377       1 logging.go:55] [core] [Channel #219 SubChannel #221]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
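This wall of identical grpc dial errors is the pre-restart apiserver's etcd client retrying 127.0.0.1:2379 after etcd shut down at 21:37:27 (see the e54d… etcd block above); it ends when the old apiserver process exits. When scanning, it can be collapsed to a single count, assuming the container is still present:

    minikube -p pause-918446 ssh -- sudo crictl logs 776d553a4c82 2>&1 | grep -c 'connection refused'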
	
	
	==> kube-controller-manager [8a445923145e34da32a48263e4db5c4034994c139614d9318bf0059d4f765b78] <==
	I1217 21:37:43.460830       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 21:37:43.462288       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1217 21:37:43.465704       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1217 21:37:43.468039       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1217 21:37:43.478579       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1217 21:37:43.478588       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1217 21:37:43.478606       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1217 21:37:43.478621       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1217 21:37:43.478636       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1217 21:37:43.478650       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1217 21:37:43.478662       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1217 21:37:43.479680       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1217 21:37:43.479737       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1217 21:37:43.480952       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1217 21:37:43.486181       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1217 21:37:43.486307       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1217 21:37:43.486390       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-918446"
	I1217 21:37:43.486441       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1217 21:37:43.491710       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1217 21:37:43.491760       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1217 21:37:43.496014       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1217 21:37:43.501341       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1217 21:37:43.503856       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1217 21:37:43.512518       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1217 21:37:53.487732       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-controller-manager [d2cb624370de704a2f2fb8a42c2d5a2a132c9c0f57353890ebc7ffa8d923605a] <==
	I1217 21:37:09.265055       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1217 21:37:09.275741       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1217 21:37:09.281301       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1217 21:37:09.284663       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1217 21:37:09.284755       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1217 21:37:09.287222       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1217 21:37:09.287380       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1217 21:37:09.287419       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1217 21:37:09.287448       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1217 21:37:09.287541       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1217 21:37:09.287717       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1217 21:37:09.287705       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1217 21:37:09.287975       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1217 21:37:09.288068       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1217 21:37:09.289211       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1217 21:37:09.289295       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1217 21:37:09.290327       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1217 21:37:09.290439       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1217 21:37:09.290478       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1217 21:37:09.292269       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1217 21:37:09.296442       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1217 21:37:09.305565       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 21:37:09.316902       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1217 21:37:09.341083       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1217 21:37:24.242639       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
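Both controller-manager generations show the same cycle: the node-lifecycle controller enters "master disruption mode" while the lone node reports NotReady during a kubelet restart and exits once the node is Ready again (exit at 21:37:24 in this older instance; the newer one enters at 21:37:43 and exits at 21:37:53), matching the NodeNotReady/NodeReady events in the describe output above. Those node events can be pulled directly:

    kubectl --context pause-918446 get events --field-selector involvedObject.name=pause-918446 --sort-by=.lastTimestamp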
	
	
	==> kube-proxy [1cca7e7499270d5a13c177bfb97573b991962b599df6a6ce2260aed393abbb0d] <==
	I1217 21:37:35.230260       1 server_linux.go:53] "Using iptables proxy"
	I1217 21:37:36.752604       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1217 21:37:40.121247       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1217 21:37:40.121290       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1217 21:37:40.121370       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 21:37:40.408085       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 21:37:40.408203       1 server_linux.go:132] "Using iptables Proxier"
	I1217 21:37:40.412862       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 21:37:40.413204       1 server.go:527] "Version info" version="v1.34.3"
	I1217 21:37:40.413388       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 21:37:40.414655       1 config.go:200] "Starting service config controller"
	I1217 21:37:40.414719       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 21:37:40.414762       1 config.go:106] "Starting endpoint slice config controller"
	I1217 21:37:40.414790       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 21:37:40.414847       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 21:37:40.414874       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 21:37:40.415645       1 config.go:309] "Starting node config controller"
	I1217 21:37:40.415695       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 21:37:40.415724       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 21:37:40.515892       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1217 21:37:40.515987       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1217 21:37:40.516874       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [eecdfd5180ffaad58834ff194e5867c5f6eabf3b2f43f4e5c424692c8376e31c] <==
	I1217 21:37:10.751128       1 server_linux.go:53] "Using iptables proxy"
	I1217 21:37:10.853733       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1217 21:37:10.954988       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1217 21:37:10.955063       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1217 21:37:10.955150       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 21:37:11.001415       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 21:37:11.001496       1 server_linux.go:132] "Using iptables Proxier"
	I1217 21:37:11.011072       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 21:37:11.011429       1 server.go:527] "Version info" version="v1.34.3"
	I1217 21:37:11.011457       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 21:37:11.013227       1 config.go:200] "Starting service config controller"
	I1217 21:37:11.013249       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 21:37:11.013266       1 config.go:106] "Starting endpoint slice config controller"
	I1217 21:37:11.013270       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 21:37:11.013280       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 21:37:11.013284       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 21:37:11.013935       1 config.go:309] "Starting node config controller"
	I1217 21:37:11.013946       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 21:37:11.013952       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 21:37:11.113569       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1217 21:37:11.113605       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1217 21:37:11.113640       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
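Both kube-proxy runs emit the same advisory because nodePortAddresses is unset, so NodePort connections are accepted on every local IP. The remedy the message itself suggests is the special value primary; in this kubeadm-managed cluster that would go into the kube-proxy ConfigMap (a sketch only, not something the test applies):

    kubectl --context pause-918446 -n kube-system edit configmap kube-proxy
    # inside the embedded KubeProxyConfiguration, set:
    #   nodePortAddresses: ["primary"]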
	
	
	==> kube-scheduler [2a5ce624f1a768e61331b170f438d08748328a6bad10cd0836782475886d0f78] <==
	I1217 21:37:39.729461       1 serving.go:386] Generated self-signed cert in-memory
	I1217 21:37:42.036348       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.3"
	I1217 21:37:42.036666       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 21:37:42.042865       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1217 21:37:42.043043       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 21:37:42.043198       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 21:37:42.043003       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1217 21:37:42.043285       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1217 21:37:42.043056       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1217 21:37:42.046773       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1217 21:37:42.043069       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1217 21:37:42.143609       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1217 21:37:42.143974       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 21:37:42.148187       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kube-scheduler [dde6c95ebd91853deb5bebbd95070104e1b043bdf297eae1114f0db44dd281ab] <==
	E1217 21:37:02.359611       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1217 21:37:02.359759       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1217 21:37:02.359840       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1217 21:37:02.360026       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1217 21:37:02.360074       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1217 21:37:02.360190       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1217 21:37:02.360240       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1217 21:37:02.360302       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1217 21:37:02.360327       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1217 21:37:03.203016       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1217 21:37:03.235088       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1217 21:37:03.303095       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1217 21:37:03.318560       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1217 21:37:03.477322       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1217 21:37:03.523489       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1217 21:37:03.547036       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1217 21:37:03.600547       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1217 21:37:03.616657       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	I1217 21:37:06.616110       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 21:37:27.254749       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1217 21:37:27.254808       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 21:37:27.255844       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1217 21:37:27.255879       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1217 21:37:27.255897       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1217 21:37:27.255913       1 run.go:72] "command failed" err="finished without leader elect"
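The "Failed to watch … forbidden" burst at 21:37:02-03 is the usual scheduler startup race: its informers begin listing before the RBAC bindings are observable, and the errors stop once caches sync (21:37:06). The closing error-level "finished without leader elect" appears to be this scheduler's normal exit path when it is terminated, here at 21:37:27 together with the rest of the old control plane. Afterwards the permissions can be spot-checked via impersonation:

    kubectl --context pause-918446 auth can-i list pods --as=system:kube-scheduler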
	
	
	==> kubelet <==
	Dec 17 21:37:35 pause-918446 kubelet[1353]: E1217 21:37:35.079862    1353 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kindnet-7v8zq\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="3f504d98-d9d6-494d-8e63-da19b280fbb4" pod="kube-system/kindnet-7v8zq"
	Dec 17 21:37:35 pause-918446 kubelet[1353]: E1217 21:37:35.080032    1353 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w6lj6\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="c0ee899c-83cb-4b55-a66f-7ddad08cb670" pod="kube-system/kube-proxy-w6lj6"
	Dec 17 21:37:35 pause-918446 kubelet[1353]: E1217 21:37:35.080186    1353 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-jtb8l\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="1134b539-53f9-4702-a716-ed4285b2123e" pod="kube-system/coredns-66bc5c9577-jtb8l"
	Dec 17 21:37:35 pause-918446 kubelet[1353]: I1217 21:37:35.083650    1353 scope.go:117] "RemoveContainer" containerID="e54d9f4786a1252ba2f1afa2ea94a40c28d6ddec95c5afb31c0369d80e532d7f"
	Dec 17 21:37:35 pause-918446 kubelet[1353]: E1217 21:37:35.084186    1353 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kindnet-7v8zq\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="3f504d98-d9d6-494d-8e63-da19b280fbb4" pod="kube-system/kindnet-7v8zq"
	Dec 17 21:37:35 pause-918446 kubelet[1353]: E1217 21:37:35.084375    1353 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w6lj6\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="c0ee899c-83cb-4b55-a66f-7ddad08cb670" pod="kube-system/kube-proxy-w6lj6"
	Dec 17 21:37:35 pause-918446 kubelet[1353]: E1217 21:37:35.084550    1353 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-jtb8l\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="1134b539-53f9-4702-a716-ed4285b2123e" pod="kube-system/coredns-66bc5c9577-jtb8l"
	Dec 17 21:37:35 pause-918446 kubelet[1353]: E1217 21:37:35.084713    1353 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-918446\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="30d5bf2c1d75f6460c1279bfe1e180cc" pod="kube-system/etcd-pause-918446"
	Dec 17 21:37:35 pause-918446 kubelet[1353]: E1217 21:37:35.084885    1353 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-918446\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="305702fa0014ad925d7452a45ef04fb8" pod="kube-system/kube-apiserver-pause-918446"
	Dec 17 21:37:35 pause-918446 kubelet[1353]: E1217 21:37:35.085045    1353 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-918446\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="f17c397290e51c5b34e50c8d2666f5e5" pod="kube-system/kube-controller-manager-pause-918446"
	Dec 17 21:37:35 pause-918446 kubelet[1353]: E1217 21:37:35.085208    1353 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-918446\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="f92ee7eab4be3269dcb232790487d996" pod="kube-system/kube-scheduler-pause-918446"
	Dec 17 21:37:35 pause-918446 kubelet[1353]: I1217 21:37:35.652596    1353 setters.go:543] "Node became not ready" node="pause-918446" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-17T21:37:35Z","lastTransitionTime":"2025-12-17T21:37:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: plugin status uninitialized"}
	Dec 17 21:37:36 pause-918446 kubelet[1353]: E1217 21:37:36.781696    1353 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: plugin status uninitialized" pod="kube-system/coredns-66bc5c9577-jtb8l" podUID="1134b539-53f9-4702-a716-ed4285b2123e"
	Dec 17 21:37:38 pause-918446 kubelet[1353]: E1217 21:37:38.781069    1353 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: plugin status uninitialized" pod="kube-system/coredns-66bc5c9577-jtb8l" podUID="1134b539-53f9-4702-a716-ed4285b2123e"
	Dec 17 21:37:39 pause-918446 kubelet[1353]: E1217 21:37:39.984674    1353 reflector.go:205] "Failed to watch" err="configmaps \"kube-proxy\" is forbidden: User \"system:node:pause-918446\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-918446' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Dec 17 21:37:39 pause-918446 kubelet[1353]: E1217 21:37:39.990559    1353 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-918446\" is forbidden: User \"system:node:pause-918446\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-918446' and this object" podUID="30d5bf2c1d75f6460c1279bfe1e180cc" pod="kube-system/etcd-pause-918446"
	Dec 17 21:37:39 pause-918446 kubelet[1353]: E1217 21:37:39.991028    1353 reflector.go:205] "Failed to watch" err="configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:pause-918446\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-918446' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Dec 17 21:37:39 pause-918446 kubelet[1353]: E1217 21:37:39.991166    1353 reflector.go:205] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-918446\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-918446' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Dec 17 21:37:40 pause-918446 kubelet[1353]: E1217 21:37:40.030462    1353 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-918446\" is forbidden: User \"system:node:pause-918446\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-918446' and this object" podUID="305702fa0014ad925d7452a45ef04fb8" pod="kube-system/kube-apiserver-pause-918446"
	Dec 17 21:37:40 pause-918446 kubelet[1353]: E1217 21:37:40.054709    1353 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-918446\" is forbidden: User \"system:node:pause-918446\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-918446' and this object" podUID="f17c397290e51c5b34e50c8d2666f5e5" pod="kube-system/kube-controller-manager-pause-918446"
	Dec 17 21:37:40 pause-918446 kubelet[1353]: I1217 21:37:40.784475    1353 scope.go:117] "RemoveContainer" containerID="435b580572a8bd0462449f84c02d6482d96d57145cf44d953cdbce897db99730"
	Dec 17 21:37:45 pause-918446 kubelet[1353]: W1217 21:37:45.011945    1353 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Dec 17 21:37:53 pause-918446 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 17 21:37:53 pause-918446 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 17 21:37:53 pause-918446 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
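The kubelet excerpt above is consistent with a paused control plane rather than a crashed one: every pod status update fails with connection refused against 192.168.76.2:8443, and systemd then stops kubelet.service cleanly. A quick host-side check of whether the apiserver socket is really closed might look like this (an illustrative sketch, not part of the recorded run; it assumes the pause-918446 container is still up and that nc is available inside the kicbase image):

	# hypothetical diagnostic: probe the apiserver port from inside the node container
	docker exec pause-918446 sh -c 'nc -z -w 2 192.168.76.2 8443 && echo open || echo refused'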
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-918446 -n pause-918446
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-918446 -n pause-918446: exit status 2 (359.862751ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
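The harness tolerates the non-zero exit here because minikube status deliberately returns a non-zero code whenever any component is not in its expected state, even though the API server field itself prints Running. When the exit code alone is ambiguous, the same state can be read as structured output (a sketch; minikube status also accepts --output json):

	out/minikube-linux-arm64 status -p pause-918446 --output json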
helpers_test.go:270: (dbg) Run:  kubectl --context pause-918446 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
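The field selector above limits the listing to pods whose phase is not Running. An illustrative widening of the same query, useful for triage when a stuck pod does show up (same kubectl context, with -o wide added for node and IP columns):

	kubectl --context pause-918446 get po -A --field-selector=status.phase!=Running -o wide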
helpers_test.go:294: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect pause-918446
helpers_test.go:244: (dbg) docker inspect pause-918446:

-- stdout --
	[
	    {
	        "Id": "766e98a0de82b6ee95015f80f40b90eccc0fc14978602887b22236fb87279ff4",
	        "Created": "2025-12-17T21:36:38.459481929Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 703251,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T21:36:38.515561948Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/766e98a0de82b6ee95015f80f40b90eccc0fc14978602887b22236fb87279ff4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/766e98a0de82b6ee95015f80f40b90eccc0fc14978602887b22236fb87279ff4/hostname",
	        "HostsPath": "/var/lib/docker/containers/766e98a0de82b6ee95015f80f40b90eccc0fc14978602887b22236fb87279ff4/hosts",
	        "LogPath": "/var/lib/docker/containers/766e98a0de82b6ee95015f80f40b90eccc0fc14978602887b22236fb87279ff4/766e98a0de82b6ee95015f80f40b90eccc0fc14978602887b22236fb87279ff4-json.log",
	        "Name": "/pause-918446",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-918446:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-918446",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "766e98a0de82b6ee95015f80f40b90eccc0fc14978602887b22236fb87279ff4",
	                "LowerDir": "/var/lib/docker/overlay2/21d0801c712512a71a24af57a9cd384b37544cc71bd3d0a0af70614ea3ee6eb9-init/diff:/var/lib/docker/overlay2/c4759b344ecb109c83d66e4ef56c76903f1d1e597efb86ab2d2c7911b1130a8f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/21d0801c712512a71a24af57a9cd384b37544cc71bd3d0a0af70614ea3ee6eb9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/21d0801c712512a71a24af57a9cd384b37544cc71bd3d0a0af70614ea3ee6eb9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/21d0801c712512a71a24af57a9cd384b37544cc71bd3d0a0af70614ea3ee6eb9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "pause-918446",
	                "Source": "/var/lib/docker/volumes/pause-918446/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-918446",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-918446",
	                "name.minikube.sigs.k8s.io": "pause-918446",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5c8bc8db657feaef3c1118ad893bc11b4ae842bc52df16353fc04d5dc0d8dc83",
	            "SandboxKey": "/var/run/docker/netns/5c8bc8db657f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33423"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33424"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33427"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33425"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33426"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-918446": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ea:8e:0b:f4:ad:d9",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0f7dc9fae7b8c69416a0b9cbb4dd403c9e2ca80554239262bec9161eb4c54a52",
	                    "EndpointID": "4653020f5eac087a210f723e63a3efecacb6a6258a70ad13d496c39e757e67cd",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-918446",
	                        "766e98a0de82"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
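The inspect payload confirms the container itself is healthy: State.Status is running, State.Paused is false, and 8443/tcp is forwarded to 127.0.0.1:33426. The handful of fields the post-mortem keys on can be pulled out directly with docker inspect's Go-template formatter (an illustrative sketch, not part of the recorded run):

	docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' pause-918446
	docker inspect -f '{{json .NetworkSettings.Ports}}' pause-918446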
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-918446 -n pause-918446
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-918446 -n pause-918446: exit status 2 (337.628614ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
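Both the {{.APIServer}} and {{.Host}} probes print Running yet exit with status 2, which fits the kubelet shutdown recorded above: not every component is in its expected state after the pause. Since --format accepts any Go template over the same status struct, the relevant fields can be read in a single call (a sketch using the field names the probes above already use, plus Kubelet):

	out/minikube-linux-arm64 status -p pause-918446 --format '{{.Host}} {{.Kubelet}} {{.APIServer}}'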
helpers_test.go:253: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p pause-918446 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p pause-918446 logs -n 25: (1.376230692s)
helpers_test.go:261: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                     ARGS                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-185508 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                         │ NoKubernetes-185508       │ jenkins │ v1.37.0 │ 17 Dec 25 21:23 UTC │ 17 Dec 25 21:24 UTC │
	│ start   │ -p missing-upgrade-783783 --memory=3072 --driver=docker  --container-runtime=crio                                                             │ missing-upgrade-783783    │ jenkins │ v1.35.0 │ 17 Dec 25 21:23 UTC │ 17 Dec 25 21:24 UTC │
	│ start   │ -p NoKubernetes-185508 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                         │ NoKubernetes-185508       │ jenkins │ v1.37.0 │ 17 Dec 25 21:24 UTC │ 17 Dec 25 21:26 UTC │
	│ start   │ -p missing-upgrade-783783 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                      │ missing-upgrade-783783    │ jenkins │ v1.37.0 │ 17 Dec 25 21:24 UTC │ 17 Dec 25 21:25 UTC │
	│ delete  │ -p missing-upgrade-783783                                                                                                                     │ missing-upgrade-783783    │ jenkins │ v1.37.0 │ 17 Dec 25 21:25 UTC │ 17 Dec 25 21:25 UTC │
	│ start   │ -p kubernetes-upgrade-342357 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio      │ kubernetes-upgrade-342357 │ jenkins │ v1.37.0 │ 17 Dec 25 21:25 UTC │ 17 Dec 25 21:25 UTC │
	│ stop    │ -p kubernetes-upgrade-342357                                                                                                                  │ kubernetes-upgrade-342357 │ jenkins │ v1.37.0 │ 17 Dec 25 21:25 UTC │ 17 Dec 25 21:26 UTC │
	│ start   │ -p kubernetes-upgrade-342357 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-342357 │ jenkins │ v1.37.0 │ 17 Dec 25 21:26 UTC │                     │
	│ delete  │ -p NoKubernetes-185508                                                                                                                        │ NoKubernetes-185508       │ jenkins │ v1.37.0 │ 17 Dec 25 21:26 UTC │ 17 Dec 25 21:26 UTC │
	│ start   │ -p NoKubernetes-185508 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                         │ NoKubernetes-185508       │ jenkins │ v1.37.0 │ 17 Dec 25 21:26 UTC │ 17 Dec 25 21:26 UTC │
	│ ssh     │ -p NoKubernetes-185508 sudo systemctl is-active --quiet service kubelet                                                                       │ NoKubernetes-185508       │ jenkins │ v1.37.0 │ 17 Dec 25 21:26 UTC │                     │
	│ stop    │ -p NoKubernetes-185508                                                                                                                        │ NoKubernetes-185508       │ jenkins │ v1.37.0 │ 17 Dec 25 21:26 UTC │ 17 Dec 25 21:26 UTC │
	│ start   │ -p NoKubernetes-185508 --driver=docker  --container-runtime=crio                                                                              │ NoKubernetes-185508       │ jenkins │ v1.37.0 │ 17 Dec 25 21:26 UTC │ 17 Dec 25 21:26 UTC │
	│ ssh     │ -p NoKubernetes-185508 sudo systemctl is-active --quiet service kubelet                                                                       │ NoKubernetes-185508       │ jenkins │ v1.37.0 │ 17 Dec 25 21:26 UTC │                     │
	│ delete  │ -p NoKubernetes-185508                                                                                                                        │ NoKubernetes-185508       │ jenkins │ v1.37.0 │ 17 Dec 25 21:26 UTC │ 17 Dec 25 21:26 UTC │
	│ start   │ -p stopped-upgrade-993252 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                          │ stopped-upgrade-993252    │ jenkins │ v1.35.0 │ 17 Dec 25 21:26 UTC │ 17 Dec 25 21:27 UTC │
	│ stop    │ stopped-upgrade-993252 stop                                                                                                                   │ stopped-upgrade-993252    │ jenkins │ v1.35.0 │ 17 Dec 25 21:27 UTC │ 17 Dec 25 21:27 UTC │
	│ start   │ -p stopped-upgrade-993252 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                      │ stopped-upgrade-993252    │ jenkins │ v1.37.0 │ 17 Dec 25 21:27 UTC │ 17 Dec 25 21:31 UTC │
	│ delete  │ -p stopped-upgrade-993252                                                                                                                     │ stopped-upgrade-993252    │ jenkins │ v1.37.0 │ 17 Dec 25 21:31 UTC │ 17 Dec 25 21:31 UTC │
	│ start   │ -p running-upgrade-206976 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                          │ running-upgrade-206976    │ jenkins │ v1.35.0 │ 17 Dec 25 21:31 UTC │ 17 Dec 25 21:32 UTC │
	│ start   │ -p running-upgrade-206976 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                      │ running-upgrade-206976    │ jenkins │ v1.37.0 │ 17 Dec 25 21:32 UTC │ 17 Dec 25 21:36 UTC │
	│ delete  │ -p running-upgrade-206976                                                                                                                     │ running-upgrade-206976    │ jenkins │ v1.37.0 │ 17 Dec 25 21:36 UTC │ 17 Dec 25 21:36 UTC │
	│ start   │ -p pause-918446 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                     │ pause-918446              │ jenkins │ v1.37.0 │ 17 Dec 25 21:36 UTC │ 17 Dec 25 21:37 UTC │
	│ start   │ -p pause-918446 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                              │ pause-918446              │ jenkins │ v1.37.0 │ 17 Dec 25 21:37 UTC │ 17 Dec 25 21:37 UTC │
	│ pause   │ -p pause-918446 --alsologtostderr -v=5                                                                                                        │ pause-918446              │ jenkins │ v1.37.0 │ 17 Dec 25 21:37 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 21:37:26
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 21:37:26.074742  705673 out.go:360] Setting OutFile to fd 1 ...
	I1217 21:37:26.075277  705673 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 21:37:26.075311  705673 out.go:374] Setting ErrFile to fd 2...
	I1217 21:37:26.075331  705673 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 21:37:26.075883  705673 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-485134/.minikube/bin
	I1217 21:37:26.076415  705673 out.go:368] Setting JSON to false
	I1217 21:37:26.077503  705673 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":15595,"bootTime":1765991851,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1217 21:37:26.077626  705673 start.go:143] virtualization:  
	I1217 21:37:26.080948  705673 out.go:179] * [pause-918446] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 21:37:26.084193  705673 out.go:179]   - MINIKUBE_LOCATION=21808
	I1217 21:37:26.084403  705673 notify.go:221] Checking for updates...
	I1217 21:37:26.091708  705673 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 21:37:26.094792  705673 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-485134/kubeconfig
	I1217 21:37:26.100332  705673 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-485134/.minikube
	I1217 21:37:26.103435  705673 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 21:37:26.106289  705673 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 21:37:26.109961  705673 config.go:182] Loaded profile config "pause-918446": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 21:37:26.110618  705673 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 21:37:26.147818  705673 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 21:37:26.147985  705673 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 21:37:26.209684  705673 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-17 21:37:26.200553189 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 21:37:26.209794  705673 docker.go:319] overlay module found
	I1217 21:37:26.212982  705673 out.go:179] * Using the docker driver based on existing profile
	I1217 21:37:26.215900  705673 start.go:309] selected driver: docker
	I1217 21:37:26.215920  705673 start.go:927] validating driver "docker" against &{Name:pause-918446 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:pause-918446 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 21:37:26.216054  705673 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 21:37:26.216160  705673 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 21:37:26.278118  705673 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-17 21:37:26.268834521 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 21:37:26.278562  705673 cni.go:84] Creating CNI manager for ""
	I1217 21:37:26.278615  705673 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 21:37:26.278671  705673 start.go:353] cluster config:
	{Name:pause-918446 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:pause-918446 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 21:37:26.283739  705673 out.go:179] * Starting "pause-918446" primary control-plane node in "pause-918446" cluster
	I1217 21:37:26.286625  705673 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 21:37:26.289711  705673 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 21:37:26.292702  705673 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 21:37:26.292773  705673 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21808-485134/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4
	I1217 21:37:26.292788  705673 cache.go:65] Caching tarball of preloaded images
	I1217 21:37:26.292820  705673 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 21:37:26.292872  705673 preload.go:238] Found /home/jenkins/minikube-integration/21808-485134/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1217 21:37:26.292882  705673 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1217 21:37:26.293022  705673 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/pause-918446/config.json ...
	I1217 21:37:26.312986  705673 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 21:37:26.313009  705673 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 21:37:26.313028  705673 cache.go:243] Successfully downloaded all kic artifacts
	I1217 21:37:26.313058  705673 start.go:360] acquireMachinesLock for pause-918446: {Name:mk31914dae1555bb906adecd01310ccb2e7c2ac9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 21:37:26.313124  705673 start.go:364] duration metric: took 43.438µs to acquireMachinesLock for "pause-918446"
	I1217 21:37:26.313146  705673 start.go:96] Skipping create...Using existing machine configuration
	I1217 21:37:26.313157  705673 fix.go:54] fixHost starting: 
	I1217 21:37:26.313418  705673 cli_runner.go:164] Run: docker container inspect pause-918446 --format={{.State.Status}}
	I1217 21:37:26.330121  705673 fix.go:112] recreateIfNeeded on pause-918446: state=Running err=<nil>
	W1217 21:37:26.330155  705673 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 21:37:26.333397  705673 out.go:252] * Updating the running docker "pause-918446" container ...
	I1217 21:37:26.333431  705673 machine.go:94] provisionDockerMachine start ...
	I1217 21:37:26.333533  705673 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-918446
	I1217 21:37:26.351762  705673 main.go:143] libmachine: Using SSH client type: native
	I1217 21:37:26.352088  705673 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1217 21:37:26.352103  705673 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 21:37:26.483028  705673 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-918446
	
	I1217 21:37:26.483055  705673 ubuntu.go:182] provisioning hostname "pause-918446"
	I1217 21:37:26.483129  705673 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-918446
	I1217 21:37:26.501031  705673 main.go:143] libmachine: Using SSH client type: native
	I1217 21:37:26.501353  705673 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1217 21:37:26.501377  705673 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-918446 && echo "pause-918446" | sudo tee /etc/hostname
	I1217 21:37:26.640711  705673 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-918446
	
	I1217 21:37:26.640793  705673 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-918446
	I1217 21:37:26.660356  705673 main.go:143] libmachine: Using SSH client type: native
	I1217 21:37:26.660671  705673 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1217 21:37:26.660693  705673 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-918446' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-918446/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-918446' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 21:37:26.796133  705673 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 21:37:26.796164  705673 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21808-485134/.minikube CaCertPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21808-485134/.minikube}
	I1217 21:37:26.796188  705673 ubuntu.go:190] setting up certificates
	I1217 21:37:26.796196  705673 provision.go:84] configureAuth start
	I1217 21:37:26.796261  705673 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-918446
	I1217 21:37:26.814447  705673 provision.go:143] copyHostCerts
	I1217 21:37:26.814522  705673 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem, removing ...
	I1217 21:37:26.814531  705673 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem
	I1217 21:37:26.814607  705673 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21808-485134/.minikube/ca.pem (1082 bytes)
	I1217 21:37:26.814714  705673 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem, removing ...
	I1217 21:37:26.814720  705673 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem
	I1217 21:37:26.814744  705673 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21808-485134/.minikube/cert.pem (1123 bytes)
	I1217 21:37:26.814793  705673 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem, removing ...
	I1217 21:37:26.814798  705673 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem
	I1217 21:37:26.814819  705673 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21808-485134/.minikube/key.pem (1675 bytes)
	I1217 21:37:26.814863  705673 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem org=jenkins.pause-918446 san=[127.0.0.1 192.168.76.2 localhost minikube pause-918446]
	I1217 21:37:26.920588  705673 provision.go:177] copyRemoteCerts
	I1217 21:37:26.920658  705673 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 21:37:26.920705  705673 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-918446
	I1217 21:37:26.938513  705673 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/pause-918446/id_rsa Username:docker}
	I1217 21:37:27.035588  705673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 21:37:27.052851  705673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1217 21:37:27.070854  705673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 21:37:27.088283  705673 provision.go:87] duration metric: took 292.064896ms to configureAuth
	I1217 21:37:27.088312  705673 ubuntu.go:206] setting minikube options for container-runtime
	I1217 21:37:27.088583  705673 config.go:182] Loaded profile config "pause-918446": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 21:37:27.088697  705673 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-918446
	I1217 21:37:27.106211  705673 main.go:143] libmachine: Using SSH client type: native
	I1217 21:37:27.106528  705673 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1217 21:37:27.106549  705673 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 21:37:32.460162  705673 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 21:37:32.460184  705673 machine.go:97] duration metric: took 6.126745744s to provisionDockerMachine
	I1217 21:37:32.460194  705673 start.go:293] postStartSetup for "pause-918446" (driver="docker")
	I1217 21:37:32.460205  705673 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 21:37:32.460263  705673 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 21:37:32.460299  705673 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-918446
	I1217 21:37:32.478458  705673 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/pause-918446/id_rsa Username:docker}
	I1217 21:37:32.575727  705673 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 21:37:32.579073  705673 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 21:37:32.579109  705673 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 21:37:32.579121  705673 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-485134/.minikube/addons for local assets ...
	I1217 21:37:32.579175  705673 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-485134/.minikube/files for local assets ...
	I1217 21:37:32.579261  705673 filesync.go:149] local asset: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem -> 4884122.pem in /etc/ssl/certs
	I1217 21:37:32.579362  705673 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 21:37:32.586847  705673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem --> /etc/ssl/certs/4884122.pem (1708 bytes)
	I1217 21:37:32.604910  705673 start.go:296] duration metric: took 144.700759ms for postStartSetup
	I1217 21:37:32.605003  705673 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 21:37:32.605057  705673 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-918446
	I1217 21:37:32.622648  705673 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/pause-918446/id_rsa Username:docker}
	I1217 21:37:32.716928  705673 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 21:37:32.721761  705673 fix.go:56] duration metric: took 6.408597365s for fixHost
	I1217 21:37:32.721794  705673 start.go:83] releasing machines lock for "pause-918446", held for 6.408653021s
	I1217 21:37:32.721871  705673 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-918446
	I1217 21:37:32.738219  705673 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem (1338 bytes)
	W1217 21:37:32.738276  705673 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412_empty.pem, impossibly tiny 0 bytes
	I1217 21:37:32.738285  705673 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 21:37:32.738315  705673 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem (1082 bytes)
	I1217 21:37:32.738397  705673 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem (1123 bytes)
	I1217 21:37:32.738430  705673 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem (1675 bytes)
	I1217 21:37:32.738480  705673 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem (1708 bytes)
	I1217 21:37:32.738551  705673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem --> /usr/share/ca-certificates/4884122.pem (1708 bytes)
	I1217 21:37:32.738604  705673 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-918446
	I1217 21:37:32.755283  705673 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/pause-918446/id_rsa Username:docker}
	I1217 21:37:32.862682  705673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 21:37:32.880415  705673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem --> /usr/share/ca-certificates/488412.pem (1338 bytes)
	I1217 21:37:32.898787  705673 ssh_runner.go:195] Run: openssl version
	I1217 21:37:32.905580  705673 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 21:37:32.913224  705673 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 21:37:32.920775  705673 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 21:37:32.924976  705673 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I1217 21:37:32.925064  705673 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 21:37:32.968258  705673 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 21:37:32.976064  705673 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/488412.pem
	I1217 21:37:32.983385  705673 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/488412.pem /etc/ssl/certs/488412.pem
	I1217 21:37:32.990733  705673 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/488412.pem
	I1217 21:37:32.994684  705673 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 20:21 /usr/share/ca-certificates/488412.pem
	I1217 21:37:32.994797  705673 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/488412.pem
	I1217 21:37:33.041049  705673 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 21:37:33.048954  705673 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4884122.pem
	I1217 21:37:33.056188  705673 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4884122.pem /etc/ssl/certs/4884122.pem
	I1217 21:37:33.064060  705673 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4884122.pem
	I1217 21:37:33.067809  705673 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 20:21 /usr/share/ca-certificates/4884122.pem
	I1217 21:37:33.067881  705673 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4884122.pem
	I1217 21:37:33.109462  705673 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 21:37:33.117034  705673 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-certificates >/dev/null 2>&1 && sudo update-ca-certificates || true"
	I1217 21:37:33.120804  705673 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-trust >/dev/null 2>&1 && sudo update-ca-trust extract || true"
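
The ln/openssl sequence above is how an OpenSSL-style trust store is populated: each CA is looked up by its subject hash, so every PEM gets a "<hash>.0" symlink under /etc/ssl/certs (the later probes for b5213941.0 and 51391683.0 check exactly those links). A minimal sketch of the same operation, assuming a hypothetical certificate at /usr/share/ca-certificates/example.pem:

    # compute the subject hash OpenSSL uses for lookup, then link the cert under it
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/example.pem)
    sudo ln -fs /usr/share/ca-certificates/example.pem "/etc/ssl/certs/${hash}.0"
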
	I1217 21:37:33.124612  705673 ssh_runner.go:195] Run: cat /version.json
	I1217 21:37:33.124725  705673 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 21:37:33.215080  705673 ssh_runner.go:195] Run: systemctl --version
	I1217 21:37:33.221700  705673 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 21:37:33.266897  705673 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 21:37:33.271327  705673 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 21:37:33.271466  705673 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 21:37:33.280665  705673 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
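
The find invocation above loses its shell escaping in the log; with the quoting restored (a reconstruction, not the literal bytes sent over SSH), it renames any bridge or podman CNI config out of the way by appending a .mk_disabled suffix:

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" \;

Here nothing matched, hence the "nothing to disable" line.
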
	I1217 21:37:33.280694  705673 start.go:496] detecting cgroup driver to use...
	I1217 21:37:33.280738  705673 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 21:37:33.280817  705673 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 21:37:33.296453  705673 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 21:37:33.309895  705673 docker.go:218] disabling cri-docker service (if available) ...
	I1217 21:37:33.310011  705673 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 21:37:33.325353  705673 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 21:37:33.338446  705673 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 21:37:33.478397  705673 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 21:37:33.623449  705673 docker.go:234] disabling docker service ...
	I1217 21:37:33.623559  705673 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 21:37:33.639648  705673 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 21:37:33.652744  705673 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 21:37:33.790771  705673 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 21:37:33.926157  705673 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
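
Stopping the units alone would not be enough under systemd, since a request to the socket would reactivate them; disabling docker.socket and masking docker.service prevents socket activation from resurrecting Docker while CRI-O owns the runtime. The same steps as standalone commands, mirroring the log:

    sudo systemctl stop -f docker.socket docker.service
    sudo systemctl disable docker.socket
    sudo systemctl mask docker.service
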
	I1217 21:37:33.939492  705673 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 21:37:33.954734  705673 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 21:37:33.954802  705673 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 21:37:33.964683  705673 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1217 21:37:33.964746  705673 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 21:37:33.973796  705673 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 21:37:33.982965  705673 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 21:37:33.991843  705673 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 21:37:34.001986  705673 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 21:37:34.012941  705673 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 21:37:34.022348  705673 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 21:37:34.032191  705673 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 21:37:34.040857  705673 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 21:37:34.049084  705673 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 21:37:34.180369  705673 ssh_runner.go:195] Run: sudo systemctl restart crio
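
The net effect of the tee and sed edits above, as a sketch of the resulting files (stock kicbase paths assumed):

    # /etc/crictl.yaml -- point crictl at CRI-O's socket
    runtime-endpoint: unix:///var/run/crio/crio.sock

    # /etc/crio/crio.conf.d/02-crio.conf -- keys the edits leave behind
    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]

The ip_unprivileged_port_start=0 sysctl lets pods bind ports below 1024 without extra privileges, and the daemon-reload/restart pair applies the new configuration.
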
	I1217 21:37:34.965607  705673 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 21:37:34.965728  705673 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 21:37:34.970148  705673 start.go:564] Will wait 60s for crictl version
	I1217 21:37:34.970261  705673 ssh_runner.go:195] Run: which crictl
	I1217 21:37:34.975549  705673 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 21:37:35.018685  705673 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 21:37:35.018837  705673 ssh_runner.go:195] Run: crio --version
	I1217 21:37:35.079723  705673 ssh_runner.go:195] Run: crio --version
	I1217 21:37:35.169373  705673 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	I1217 21:37:35.173460  705673 cli_runner.go:164] Run: docker network inspect pause-918446 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 21:37:35.190806  705673 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1217 21:37:35.199093  705673 kubeadm.go:884] updating cluster {Name:pause-918446 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:pause-918446 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 21:37:35.199235  705673 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 21:37:35.199284  705673 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 21:37:35.261122  705673 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 21:37:35.261141  705673 crio.go:433] Images already preloaded, skipping extraction
	I1217 21:37:35.261198  705673 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 21:37:35.321080  705673 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 21:37:35.321100  705673 cache_images.go:86] Images are preloaded, skipping loading
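
The preload check parses the JSON emitted by crictl; to inspect the same preloaded image list by hand, something like this works (jq assumed present; crictl's JSON output is an object with an "images" array):

    sudo crictl images --output json | jq -r '.images[].repoTags[]'
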
	I1217 21:37:35.321107  705673 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.3 crio true true} ...
	I1217 21:37:35.321214  705673 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-918446 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:pause-918446 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 21:37:35.321296  705673 ssh_runner.go:195] Run: crio config
	I1217 21:37:35.444147  705673 cni.go:84] Creating CNI manager for ""
	I1217 21:37:35.444219  705673 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 21:37:35.444246  705673 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 21:37:35.444299  705673 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-918446 NodeName:pause-918446 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 21:37:35.444477  705673 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-918446"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 21:37:35.444591  705673 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1217 21:37:35.459662  705673 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 21:37:35.459796  705673 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 21:37:35.468446  705673 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1217 21:37:35.489674  705673 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 21:37:35.508359  705673 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
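
The rendered kubeadm config is deliberately staged as kubeadm.yaml.new rather than overwriting the live file; further down the log it is diffed against /var/tmp/minikube/kubeadm.yaml to decide whether the control plane needs reconfiguring. Since diff exits 0 only when the files match, the decision reduces to:

    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new \
      && echo "no reconfiguration required"
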
	I1217 21:37:35.527943  705673 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1217 21:37:35.534486  705673 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 21:37:35.764213  705673 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 21:37:35.781868  705673 certs.go:69] Setting up /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/pause-918446 for IP: 192.168.76.2
	I1217 21:37:35.781887  705673 certs.go:195] generating shared ca certs ...
	I1217 21:37:35.781903  705673 certs.go:227] acquiring lock for ca certs: {Name:mk2b1bd9fa0b029b02dd38a7b1b08717d4426eca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 21:37:35.782052  705673 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21808-485134/.minikube/ca.key
	I1217 21:37:35.782106  705673 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.key
	I1217 21:37:35.782113  705673 certs.go:257] generating profile certs ...
	I1217 21:37:35.782201  705673 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/pause-918446/client.key
	I1217 21:37:35.782271  705673 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/pause-918446/apiserver.key.3381b907
	I1217 21:37:35.782312  705673 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/pause-918446/proxy-client.key
	I1217 21:37:35.782431  705673 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem (1338 bytes)
	W1217 21:37:35.782465  705673 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412_empty.pem, impossibly tiny 0 bytes
	I1217 21:37:35.782474  705673 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 21:37:35.782503  705673 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/ca.pem (1082 bytes)
	I1217 21:37:35.782525  705673 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/cert.pem (1123 bytes)
	I1217 21:37:35.782547  705673 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/certs/key.pem (1675 bytes)
	I1217 21:37:35.782591  705673 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem (1708 bytes)
	I1217 21:37:35.783201  705673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 21:37:35.815831  705673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 21:37:35.890151  705673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 21:37:35.935646  705673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1217 21:37:35.967614  705673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/pause-918446/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1217 21:37:35.995317  705673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/pause-918446/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 21:37:36.028886  705673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/pause-918446/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 21:37:36.057940  705673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/pause-918446/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1217 21:37:36.106214  705673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/certs/488412.pem --> /usr/share/ca-certificates/488412.pem (1338 bytes)
	I1217 21:37:36.152026  705673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/ssl/certs/4884122.pem --> /usr/share/ca-certificates/4884122.pem (1708 bytes)
	I1217 21:37:36.191210  705673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 21:37:36.220838  705673 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 21:37:36.245208  705673 ssh_runner.go:195] Run: openssl version
	I1217 21:37:36.253382  705673 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 21:37:36.264747  705673 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 21:37:36.276949  705673 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 21:37:36.280764  705673 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I1217 21:37:36.280827  705673 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 21:37:36.352971  705673 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 21:37:36.365321  705673 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/488412.pem
	I1217 21:37:36.377885  705673 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/488412.pem /etc/ssl/certs/488412.pem
	I1217 21:37:36.390489  705673 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/488412.pem
	I1217 21:37:36.394887  705673 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 20:21 /usr/share/ca-certificates/488412.pem
	I1217 21:37:36.395005  705673 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/488412.pem
	I1217 21:37:36.441075  705673 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 21:37:36.449659  705673 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4884122.pem
	I1217 21:37:36.457505  705673 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4884122.pem /etc/ssl/certs/4884122.pem
	I1217 21:37:36.465736  705673 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4884122.pem
	I1217 21:37:36.469932  705673 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 20:21 /usr/share/ca-certificates/4884122.pem
	I1217 21:37:36.470053  705673 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4884122.pem
	I1217 21:37:36.518142  705673 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 21:37:36.526780  705673 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 21:37:36.531440  705673 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 21:37:36.581700  705673 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 21:37:36.649082  705673 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 21:37:36.703117  705673 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 21:37:36.749423  705673 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 21:37:36.808266  705673 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
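
Each -checkend 86400 call above exits non-zero if the certificate expires within the next 86400 seconds (24 hours), which is the signal to regenerate control-plane certs before restarting. The same probe, run manually against one of the certs copied earlier:

    openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
      && echo "valid for at least 24h" \
      || echo "expires within 24h; regenerate"
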
	I1217 21:37:36.868122  705673 kubeadm.go:401] StartCluster: {Name:pause-918446 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:pause-918446 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 21:37:36.868320  705673 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 21:37:36.868419  705673 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 21:37:36.910077  705673 cri.go:89] found id: "13b62765e940b707f2109f867ff607b28ee2b00bc649622df13dd82104b94781"
	I1217 21:37:36.910150  705673 cri.go:89] found id: "2a5ce624f1a768e61331b170f438d08748328a6bad10cd0836782475886d0f78"
	I1217 21:37:36.910169  705673 cri.go:89] found id: "730d2f116ff8547a48b3b177ef0205a4285f5b1a2d27d3af70f6c52dce002c86"
	I1217 21:37:36.910186  705673 cri.go:89] found id: "8a445923145e34da32a48263e4db5c4034994c139614d9318bf0059d4f765b78"
	I1217 21:37:36.910220  705673 cri.go:89] found id: "dea8eca7161c61cb894081fc18ba1333ebfce11ca17673be95f8ae09baa586f7"
	I1217 21:37:36.910242  705673 cri.go:89] found id: "1cca7e7499270d5a13c177bfb97573b991962b599df6a6ce2260aed393abbb0d"
	I1217 21:37:36.910261  705673 cri.go:89] found id: "435b580572a8bd0462449f84c02d6482d96d57145cf44d953cdbce897db99730"
	I1217 21:37:36.910280  705673 cri.go:89] found id: "1eb61928f5d77c91d1c42c08faf26efa6f642b7bbeb923ce2dd2d46594c88b3d"
	I1217 21:37:36.910309  705673 cri.go:89] found id: "eecdfd5180ffaad58834ff194e5867c5f6eabf3b2f43f4e5c424692c8376e31c"
	I1217 21:37:36.910336  705673 cri.go:89] found id: "776d553a4c8267d516c1d7cd7f0f211d2fd8a6fbb10a89792e1dab3050e69a60"
	I1217 21:37:36.910356  705673 cri.go:89] found id: "e54d9f4786a1252ba2f1afa2ea94a40c28d6ddec95c5afb31c0369d80e532d7f"
	I1217 21:37:36.910390  705673 cri.go:89] found id: "d2cb624370de704a2f2fb8a42c2d5a2a132c9c0f57353890ebc7ffa8d923605a"
	I1217 21:37:36.910414  705673 cri.go:89] found id: "dde6c95ebd91853deb5bebbd95070104e1b043bdf297eae1114f0db44dd281ab"
	I1217 21:37:36.910433  705673 cri.go:89] found id: ""
	I1217 21:37:36.910517  705673 ssh_runner.go:195] Run: sudo runc list -f json
	W1217 21:37:36.928245  705673 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T21:37:36Z" level=error msg="open /run/runc: no such file or directory"
	I1217 21:37:36.928323  705673 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 21:37:36.940602  705673 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 21:37:36.940671  705673 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 21:37:36.940753  705673 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 21:37:36.952681  705673 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 21:37:36.953453  705673 kubeconfig.go:125] found "pause-918446" server: "https://192.168.76.2:8443"
	I1217 21:37:36.954423  705673 kapi.go:59] client config for pause-918446: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21808-485134/.minikube/profiles/pause-918446/client.crt", KeyFile:"/home/jenkins/minikube-integration/21808-485134/.minikube/profiles/pause-918446/client.key", CAFile:"/home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb51f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 21:37:36.955225  705673 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1217 21:37:36.955346  705673 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1217 21:37:36.955374  705673 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1217 21:37:36.955393  705673 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1217 21:37:36.955434  705673 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1217 21:37:36.955899  705673 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 21:37:36.965512  705673 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1217 21:37:36.965546  705673 kubeadm.go:602] duration metric: took 24.855696ms to restartPrimaryControlPlane
	I1217 21:37:36.965556  705673 kubeadm.go:403] duration metric: took 97.444781ms to StartCluster
	I1217 21:37:36.965571  705673 settings.go:142] acquiring lock: {Name:mk91fac3a58d91b836cd0701183ef7cc1e571672 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 21:37:36.965646  705673 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21808-485134/kubeconfig
	I1217 21:37:36.966512  705673 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-485134/kubeconfig: {Name:mkc7214551fa855d6d4575d627c3ecd54291b351 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 21:37:36.966760  705673 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 21:37:36.966962  705673 config.go:182] Loaded profile config "pause-918446": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 21:37:36.967010  705673 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 21:37:36.970931  705673 out.go:179] * Verifying Kubernetes components...
	I1217 21:37:36.970931  705673 out.go:179] * Enabled addons: 
	I1217 21:37:36.973772  705673 addons.go:530] duration metric: took 6.75825ms for enable addons: enabled=[]
	I1217 21:37:36.973817  705673 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 21:37:37.198585  705673 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 21:37:37.213391  705673 node_ready.go:35] waiting up to 6m0s for node "pause-918446" to be "Ready" ...
	I1217 21:37:40.065155  705673 node_ready.go:49] node "pause-918446" is "Ready"
	I1217 21:37:40.065243  705673 node_ready.go:38] duration metric: took 2.851823357s for node "pause-918446" to be "Ready" ...
	I1217 21:37:40.065273  705673 api_server.go:52] waiting for apiserver process to appear ...
	I1217 21:37:40.065384  705673 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 21:37:40.081131  705673 api_server.go:72] duration metric: took 3.114337649s to wait for apiserver process to appear ...
	I1217 21:37:40.081159  705673 api_server.go:88] waiting for apiserver healthz status ...
	I1217 21:37:40.081180  705673 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 21:37:40.102321  705673 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 21:37:40.102355  705673 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1217 21:37:40.581994  705673 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 21:37:40.594977  705673 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 21:37:40.595008  705673 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1217 21:37:41.081990  705673 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 21:37:41.091477  705673 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 21:37:41.091505  705673 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1217 21:37:41.582156  705673 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 21:37:41.591975  705673 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1217 21:37:41.593124  705673 api_server.go:141] control plane version: v1.34.3
	I1217 21:37:41.593146  705673 api_server.go:131] duration metric: took 1.511979775s to wait for apiserver health ...
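
The 500 responses above are the expected shape of this gate: /healthz aggregates per-hook checks, and the probe simply repeats until every post-start hook reports ok. A rough shell equivalent of the loop (assuming anonymous access to /healthz, which the default system:public-info-viewer RBAC binding allows):

    until [ "$(curl -sk https://192.168.76.2:8443/healthz)" = "ok" ]; do sleep 0.5; done
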
	I1217 21:37:41.593155  705673 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 21:37:41.596774  705673 system_pods.go:59] 7 kube-system pods found
	I1217 21:37:41.596802  705673 system_pods.go:61] "coredns-66bc5c9577-jtb8l" [1134b539-53f9-4702-a716-ed4285b2123e] Running
	I1217 21:37:41.596811  705673 system_pods.go:61] "etcd-pause-918446" [da32e969-69c3-4d9a-8e22-1066dc76312d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 21:37:41.596829  705673 system_pods.go:61] "kindnet-7v8zq" [3f504d98-d9d6-494d-8e63-da19b280fbb4] Running
	I1217 21:37:41.596843  705673 system_pods.go:61] "kube-apiserver-pause-918446" [7b6f7d5a-18f9-40c1-b1d1-01e98ca5a8db] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 21:37:41.596851  705673 system_pods.go:61] "kube-controller-manager-pause-918446" [49ba563b-1e62-4e32-9838-11a97e13107f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 21:37:41.596856  705673 system_pods.go:61] "kube-proxy-w6lj6" [c0ee899c-83cb-4b55-a66f-7ddad08cb670] Running
	I1217 21:37:41.596861  705673 system_pods.go:61] "kube-scheduler-pause-918446" [85095cb2-ddfb-463b-be8f-ae8f0e29ab69] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 21:37:41.596867  705673 system_pods.go:74] duration metric: took 3.706369ms to wait for pod list to return data ...
	I1217 21:37:41.596874  705673 default_sa.go:34] waiting for default service account to be created ...
	I1217 21:37:41.599423  705673 default_sa.go:45] found service account: "default"
	I1217 21:37:41.599442  705673 default_sa.go:55] duration metric: took 2.561531ms for default service account to be created ...
	I1217 21:37:41.599450  705673 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 21:37:41.602858  705673 system_pods.go:86] 7 kube-system pods found
	I1217 21:37:41.602926  705673 system_pods.go:89] "coredns-66bc5c9577-jtb8l" [1134b539-53f9-4702-a716-ed4285b2123e] Running
	I1217 21:37:41.602952  705673 system_pods.go:89] "etcd-pause-918446" [da32e969-69c3-4d9a-8e22-1066dc76312d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 21:37:41.602973  705673 system_pods.go:89] "kindnet-7v8zq" [3f504d98-d9d6-494d-8e63-da19b280fbb4] Running
	I1217 21:37:41.603013  705673 system_pods.go:89] "kube-apiserver-pause-918446" [7b6f7d5a-18f9-40c1-b1d1-01e98ca5a8db] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 21:37:41.603042  705673 system_pods.go:89] "kube-controller-manager-pause-918446" [49ba563b-1e62-4e32-9838-11a97e13107f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 21:37:41.603071  705673 system_pods.go:89] "kube-proxy-w6lj6" [c0ee899c-83cb-4b55-a66f-7ddad08cb670] Running
	I1217 21:37:41.603112  705673 system_pods.go:89] "kube-scheduler-pause-918446" [85095cb2-ddfb-463b-be8f-ae8f0e29ab69] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 21:37:41.603140  705673 system_pods.go:126] duration metric: took 3.676559ms to wait for k8s-apps to be running ...
	I1217 21:37:41.603174  705673 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 21:37:41.603269  705673 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 21:37:41.617951  705673 system_svc.go:56] duration metric: took 14.755913ms WaitForService to wait for kubelet
	I1217 21:37:41.618033  705673 kubeadm.go:587] duration metric: took 4.651245186s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 21:37:41.618069  705673 node_conditions.go:102] verifying NodePressure condition ...
	I1217 21:37:41.622060  705673 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1217 21:37:41.622149  705673 node_conditions.go:123] node cpu capacity is 2
	I1217 21:37:41.622177  705673 node_conditions.go:105] duration metric: took 4.090011ms to run NodePressure ...
	I1217 21:37:41.622203  705673 start.go:242] waiting for startup goroutines ...
	I1217 21:37:41.622238  705673 start.go:247] waiting for cluster config update ...
	I1217 21:37:41.622265  705673 start.go:256] writing updated cluster config ...
	I1217 21:37:41.622641  705673 ssh_runner.go:195] Run: rm -f paused
	I1217 21:37:41.626955  705673 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 21:37:41.627786  705673 kapi.go:59] client config for pause-918446: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21808-485134/.minikube/profiles/pause-918446/client.crt", KeyFile:"/home/jenkins/minikube-integration/21808-485134/.minikube/profiles/pause-918446/client.key", CAFile:"/home/jenkins/minikube-integration/21808-485134/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb51f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 21:37:41.631276  705673 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-jtb8l" in "kube-system" namespace to be "Ready" or be gone ...
	W1217 21:37:43.636539  705673 pod_ready.go:104] pod "coredns-66bc5c9577-jtb8l" is not "Ready", error: node "pause-918446" hosting pod "coredns-66bc5c9577-jtb8l" is not "Ready" (will retry)
	W1217 21:37:45.636694  705673 pod_ready.go:104] pod "coredns-66bc5c9577-jtb8l" is not "Ready", error: node "pause-918446" hosting pod "coredns-66bc5c9577-jtb8l" is not "Ready" (will retry)
	W1217 21:37:47.637044  705673 pod_ready.go:104] pod "coredns-66bc5c9577-jtb8l" is not "Ready", error: node "pause-918446" hosting pod "coredns-66bc5c9577-jtb8l" is not "Ready" (will retry)
	W1217 21:37:49.637244  705673 pod_ready.go:104] pod "coredns-66bc5c9577-jtb8l" is not "Ready", error: node "pause-918446" hosting pod "coredns-66bc5c9577-jtb8l" is not "Ready" (will retry)
	I1217 21:37:50.636549  705673 pod_ready.go:94] pod "coredns-66bc5c9577-jtb8l" is "Ready"
	I1217 21:37:50.636578  705673 pod_ready.go:86] duration metric: took 9.005234069s for pod "coredns-66bc5c9577-jtb8l" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 21:37:50.639347  705673 pod_ready.go:83] waiting for pod "etcd-pause-918446" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 21:37:50.643776  705673 pod_ready.go:94] pod "etcd-pause-918446" is "Ready"
	I1217 21:37:50.643809  705673 pod_ready.go:86] duration metric: took 4.437386ms for pod "etcd-pause-918446" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 21:37:50.646492  705673 pod_ready.go:83] waiting for pod "kube-apiserver-pause-918446" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 21:37:50.651524  705673 pod_ready.go:94] pod "kube-apiserver-pause-918446" is "Ready"
	I1217 21:37:50.651556  705673 pod_ready.go:86] duration metric: took 5.036719ms for pod "kube-apiserver-pause-918446" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 21:37:50.654254  705673 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-918446" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 21:37:51.659568  705673 pod_ready.go:94] pod "kube-controller-manager-pause-918446" is "Ready"
	I1217 21:37:51.659627  705673 pod_ready.go:86] duration metric: took 1.005347445s for pod "kube-controller-manager-pause-918446" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 21:37:51.834608  705673 pod_ready.go:83] waiting for pod "kube-proxy-w6lj6" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 21:37:52.234688  705673 pod_ready.go:94] pod "kube-proxy-w6lj6" is "Ready"
	I1217 21:37:52.234719  705673 pod_ready.go:86] duration metric: took 400.08369ms for pod "kube-proxy-w6lj6" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 21:37:52.434992  705673 pod_ready.go:83] waiting for pod "kube-scheduler-pause-918446" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 21:37:52.834078  705673 pod_ready.go:94] pod "kube-scheduler-pause-918446" is "Ready"
	I1217 21:37:52.834104  705673 pod_ready.go:86] duration metric: took 399.085513ms for pod "kube-scheduler-pause-918446" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 21:37:52.834116  705673 pod_ready.go:40] duration metric: took 11.207082926s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 21:37:52.888091  705673 start.go:625] kubectl: 1.33.2, cluster: 1.34.3 (minor skew: 1)
	I1217 21:37:52.891233  705673 out.go:179] * Done! kubectl is now configured to use "pause-918446" cluster and "default" namespace by default
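The pod_ready.go entries above poll each kube-system pod until its Ready condition turns true (or the pod disappears), retrying every couple of seconds while the node itself is not Ready. A minimal client-go sketch of the same wait pattern; waitPodReady and the intervals are illustrative, not minikube's actual helper:

    package podwait

    import (
    	"context"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	apierrors "k8s.io/apimachinery/pkg/api/errors"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls until the pod reports Ready or is gone,
    // mirroring the "Ready or be gone" wait in the log above.
    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
    	return wait.PollUntilContextTimeout(ctx, 2*time.Second, 2*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    			if apierrors.IsNotFound(err) {
    				return true, nil // pod is gone; nothing left to wait for
    			}
    			if err != nil {
    				return false, nil // transient error: retry, like the "(will retry)" lines
    			}
    			for _, c := range pod.Status.Conditions {
    				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    					return true, nil
    				}
    			}
    			return false, nil
    		})
    }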
	
	
	==> CRI-O <==
	Dec 17 21:37:35 pause-918446 crio[2183]: time="2025-12-17T21:37:35.357610021Z" level=info msg="Started container" PID=2305 containerID=2a5ce624f1a768e61331b170f438d08748328a6bad10cd0836782475886d0f78 description=kube-system/kube-scheduler-pause-918446/kube-scheduler id=eda0c2b7-be52-4337-9e24-dd4c2454dd6f name=/runtime.v1.RuntimeService/StartContainer sandboxID=fc567db1c47d4e1fb7dcf06c461d88b990ae3e2b6bfd9af1e10f924817b674e3
	Dec 17 21:37:35 pause-918446 crio[2183]: time="2025-12-17T21:37:35.360481955Z" level=info msg="Created container 13b62765e940b707f2109f867ff607b28ee2b00bc649622df13dd82104b94781: kube-system/etcd-pause-918446/etcd" id=07adadd2-5442-47f0-9592-3445cf6555b2 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 21:37:35 pause-918446 crio[2183]: time="2025-12-17T21:37:35.361296698Z" level=info msg="Starting container: 13b62765e940b707f2109f867ff607b28ee2b00bc649622df13dd82104b94781" id=4b21826a-f2c0-4fbb-ba5c-7a5ba6594268 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 21:37:35 pause-918446 crio[2183]: time="2025-12-17T21:37:35.370509967Z" level=info msg="Started container" PID=2318 containerID=13b62765e940b707f2109f867ff607b28ee2b00bc649622df13dd82104b94781 description=kube-system/etcd-pause-918446/etcd id=4b21826a-f2c0-4fbb-ba5c-7a5ba6594268 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3a34c863f628eb30c38a9b70f525e1180e9949aeb6e7ac06e0e3855a903a26f4
	Dec 17 21:37:40 pause-918446 crio[2183]: time="2025-12-17T21:37:40.785583247Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.12.1" id=a24a9e48-4574-4186-8b68-462d971281c4 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 21:37:40 pause-918446 crio[2183]: time="2025-12-17T21:37:40.789140601Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.12.1" id=6181d630-8f31-431c-a7dd-172199bd81d0 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 21:37:40 pause-918446 crio[2183]: time="2025-12-17T21:37:40.792452284Z" level=info msg="Creating container: kube-system/coredns-66bc5c9577-jtb8l/coredns" id=aa91197a-89a5-45de-bf69-c8eaf95727ed name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 21:37:40 pause-918446 crio[2183]: time="2025-12-17T21:37:40.792644614Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 21:37:40 pause-918446 crio[2183]: time="2025-12-17T21:37:40.80394595Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 21:37:40 pause-918446 crio[2183]: time="2025-12-17T21:37:40.805204282Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 21:37:40 pause-918446 crio[2183]: time="2025-12-17T21:37:40.847072295Z" level=info msg="Created container 715a5b22a17b4ea89d5c5df3718aaa8b543f9ef2d94f5b5ca36d804ac9cbc1e2: kube-system/coredns-66bc5c9577-jtb8l/coredns" id=aa91197a-89a5-45de-bf69-c8eaf95727ed name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 21:37:40 pause-918446 crio[2183]: time="2025-12-17T21:37:40.848789413Z" level=info msg="Starting container: 715a5b22a17b4ea89d5c5df3718aaa8b543f9ef2d94f5b5ca36d804ac9cbc1e2" id=223854ae-ff90-485e-b193-226efe8fe3f0 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 21:37:40 pause-918446 crio[2183]: time="2025-12-17T21:37:40.850939215Z" level=info msg="Started container" PID=2650 containerID=715a5b22a17b4ea89d5c5df3718aaa8b543f9ef2d94f5b5ca36d804ac9cbc1e2 description=kube-system/coredns-66bc5c9577-jtb8l/coredns id=223854ae-ff90-485e-b193-226efe8fe3f0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ad462819e64e34ed679a1204068ecfb5cdbfb8fff6ae03b8c7bd71c5edc4de25
	Dec 17 21:37:45 pause-918446 crio[2183]: time="2025-12-17T21:37:45.536306065Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 17 21:37:45 pause-918446 crio[2183]: time="2025-12-17T21:37:45.539849331Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 17 21:37:45 pause-918446 crio[2183]: time="2025-12-17T21:37:45.539884491Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 17 21:37:45 pause-918446 crio[2183]: time="2025-12-17T21:37:45.539908934Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 17 21:37:45 pause-918446 crio[2183]: time="2025-12-17T21:37:45.543136907Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 17 21:37:45 pause-918446 crio[2183]: time="2025-12-17T21:37:45.54317324Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 17 21:37:45 pause-918446 crio[2183]: time="2025-12-17T21:37:45.543199726Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 17 21:37:45 pause-918446 crio[2183]: time="2025-12-17T21:37:45.546418994Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 17 21:37:45 pause-918446 crio[2183]: time="2025-12-17T21:37:45.546454793Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 17 21:37:45 pause-918446 crio[2183]: time="2025-12-17T21:37:45.546479097Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 17 21:37:45 pause-918446 crio[2183]: time="2025-12-17T21:37:45.549812884Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 17 21:37:45 pause-918446 crio[2183]: time="2025-12-17T21:37:45.549858768Z" level=info msg="Updated default CNI network name to kindnet"
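The CNI monitoring events above come from CRI-O watching /etc/cni/net.d with inotify and re-scanning the directory on every CREATE/WRITE/RENAME, which is why kindnet's write-temp-then-rename update triggers three reloads. A rough sketch of such a watch loop using fsnotify (illustrative only, not CRI-O's actual code):

    package main

    import (
    	"log"

    	"github.com/fsnotify/fsnotify"
    )

    func main() {
    	w, err := fsnotify.NewWatcher()
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer w.Close()

    	// Watch the CNI config directory, as CRI-O does.
    	if err := w.Add("/etc/cni/net.d"); err != nil {
    		log.Fatal(err)
    	}
    	for {
    		select {
    		case ev := <-w.Events:
    			// CREATE, WRITE and RENAME each prompt a config reload.
    			log.Printf("CNI monitoring event %s %q", ev.Op, ev.Name)
    			// ...re-scan the directory and pick the default network here...
    		case err := <-w.Errors:
    			log.Println("watch error:", err)
    		}
    	}
    }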
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	715a5b22a17b4       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                     17 seconds ago      Running             coredns                   1                   ad462819e64e3       coredns-66bc5c9577-jtb8l               kube-system
	13b62765e940b       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                     22 seconds ago      Running             etcd                      1                   3a34c863f628e       etcd-pause-918446                      kube-system
	2a5ce624f1a76       2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6                                     22 seconds ago      Running             kube-scheduler            1                   fc567db1c47d4       kube-scheduler-pause-918446            kube-system
	730d2f116ff85       cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896                                     22 seconds ago      Running             kube-apiserver            1                   bf6bf1e772ec9       kube-apiserver-pause-918446            kube-system
	8a445923145e3       7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22                                     22 seconds ago      Running             kube-controller-manager   1                   ffd31e7a7fc83       kube-controller-manager-pause-918446   kube-system
	dea8eca7161c6       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13                                     22 seconds ago      Running             kindnet-cni               1                   cf3fe7d1c52fd       kindnet-7v8zq                          kube-system
	1cca7e7499270       4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162                                     22 seconds ago      Running             kube-proxy                1                   2f68041d0ad67       kube-proxy-w6lj6                       kube-system
	435b580572a8b       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                     34 seconds ago      Exited              coredns                   0                   ad462819e64e3       coredns-66bc5c9577-jtb8l               kube-system
	1eb61928f5d77       docker.io/kindest/kindnetd@sha256:f1260f5691195cc9a693dc0b55178aa724d944efd62486a8320f0583272b1fa3   45 seconds ago      Exited              kindnet-cni               0                   cf3fe7d1c52fd       kindnet-7v8zq                          kube-system
	eecdfd5180ffa       4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162                                     47 seconds ago      Exited              kube-proxy                0                   2f68041d0ad67       kube-proxy-w6lj6                       kube-system
	776d553a4c826       cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896                                     59 seconds ago      Exited              kube-apiserver            0                   bf6bf1e772ec9       kube-apiserver-pause-918446            kube-system
	e54d9f4786a12       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                     59 seconds ago      Exited              etcd                      0                   3a34c863f628e       etcd-pause-918446                      kube-system
	d2cb624370de7       7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22                                     59 seconds ago      Exited              kube-controller-manager   0                   ffd31e7a7fc83       kube-controller-manager-pause-918446   kube-system
	dde6c95ebd918       2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6                                     59 seconds ago      Exited              kube-scheduler            0                   fc567db1c47d4       kube-scheduler-pause-918446            kube-system
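The ATTEMPT column above shows the restart cleanly: every attempt-0 container Exited when the cluster was paused/restarted and an attempt-1 replacement is Running in the same pod sandbox. The listing is what `crictl ps -a` prints; a hedged Go sketch of fetching the same data over the CRI socket (socket path assumed for CRI-O):

    package main

    import (
    	"context"
    	"fmt"
    	"log"
    	"time"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	pb "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	// Dial CRI-O's CRI endpoint over its unix socket.
    	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer conn.Close()

    	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    	defer cancel()

    	client := pb.NewRuntimeServiceClient(conn)
    	resp, err := client.ListContainers(ctx, &pb.ListContainersRequest{})
    	if err != nil {
    		log.Fatal(err)
    	}
    	for _, c := range resp.Containers {
    		// Attempt is the per-pod restart counter shown in the table.
    		fmt.Printf("%.13s  %-25s attempt=%d  %s\n",
    			c.Id, c.Metadata.Name, c.Metadata.Attempt, c.State)
    	}
    }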
	
	
	==> coredns [435b580572a8bd0462449f84c02d6482d96d57145cf44d953cdbce897db99730] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59303 - 57484 "HINFO IN 3280441896193278699.7977930218880289109. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013505613s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [715a5b22a17b4ea89d5c5df3718aaa8b543f9ef2d94f5b5ca36d804ac9cbc1e2] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46918 - 936 "HINFO IN 2769384511658514303.6415519256184776346. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.016222386s
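The single HINFO query against a long random name that each CoreDNS instance logs at startup is its loop-detection self check; an NXDOMAIN answer, as seen in both instances above, is the healthy outcome. A comparable probe with github.com/miekg/dns (illustrative):

    package main

    import (
    	"fmt"
    	"log"

    	"github.com/miekg/dns"
    )

    func main() {
    	m := new(dns.Msg)
    	// A random, non-existent name: getting NXDOMAIN back means the
    	// server answered and no forwarding loop amplified the query.
    	m.SetQuestion("3280441896193278699.7977930218880289109.", dns.TypeHINFO)

    	c := new(dns.Client)
    	r, rtt, err := c.Exchange(m, "127.0.0.1:53")
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Printf("rcode=%s rtt=%s\n", dns.RcodeToString[r.Rcode], rtt)
    }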
	
	
	==> describe nodes <==
	Name:               pause-918446
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-918446
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e7c291d147a7a4c759554efbd6d659a1a65fa869
	                    minikube.k8s.io/name=pause-918446
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T21_37_05_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 21:37:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-918446
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 21:37:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 21:37:50 +0000   Wed, 17 Dec 2025 21:36:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 21:37:50 +0000   Wed, 17 Dec 2025 21:36:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 21:37:50 +0000   Wed, 17 Dec 2025 21:36:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 21:37:50 +0000   Wed, 17 Dec 2025 21:37:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-918446
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 0dc957e113b26e583da13082693ddabc
	  System UUID:                dff8bef5-c1ae-4015-acdc-8ca26dcdb9b8
	  Boot ID:                    7e62835b-3b3c-4293-85c7-fb0aa46e4d00
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-jtb8l                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     48s
	  kube-system                 etcd-pause-918446                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         53s
	  kube-system                 kindnet-7v8zq                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      48s
	  kube-system                 kube-apiserver-pause-918446             250m (12%)    0 (0%)      0 (0%)           0 (0%)         53s
	  kube-system                 kube-controller-manager-pause-918446    200m (10%)    0 (0%)      0 (0%)           0 (0%)         53s
	  kube-system                 kube-proxy-w6lj6                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kube-system                 kube-scheduler-pause-918446             100m (5%)     0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 47s                kube-proxy       
	  Normal   Starting                 17s                kube-proxy       
	  Normal   NodeHasSufficientMemory  61s (x8 over 61s)  kubelet          Node pause-918446 status is now: NodeHasSufficientMemory
	  Normal   Starting                 61s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 61s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    61s (x8 over 61s)  kubelet          Node pause-918446 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     61s (x8 over 61s)  kubelet          Node pause-918446 status is now: NodeHasSufficientPID
	  Normal   Starting                 54s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 54s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  53s                kubelet          Node pause-918446 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    53s                kubelet          Node pause-918446 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     53s                kubelet          Node pause-918446 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           49s                node-controller  Node pause-918446 event: Registered Node pause-918446 in Controller
	  Normal   NodeNotReady             23s                kubelet          Node pause-918446 status is now: NodeNotReady
	  Normal   RegisteredNode           15s                node-controller  Node pause-918446 event: Registered Node pause-918446 in Controller
	  Normal   NodeReady                8s (x2 over 35s)   kubelet          Node pause-918446 status is now: NodeReady
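The Allocated resources block is just the column sums of the pod table: CPU requests 100m + 100m + 100m + 250m + 200m + 0 + 100m = 850m, i.e. 42.5% of this 2-CPU (2000m) node, truncated to the 42% shown; memory requests 70Mi + 100Mi + 50Mi = 220Mi, about 2.8% of the 8022300Ki allocatable, shown as 2%. The only pod with CPU limits is kindnet (100m, hence the 5% limit total), and its 50Mi plus coredns's 170Mi memory limit give the 220Mi limit total.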
	
	
	==> dmesg <==
	[Dec17 20:59] overlayfs: idmapped layers are currently not supported
	[Dec17 21:00] overlayfs: idmapped layers are currently not supported
	[Dec17 21:04] overlayfs: idmapped layers are currently not supported
	[  +3.938873] overlayfs: idmapped layers are currently not supported
	[Dec17 21:05] overlayfs: idmapped layers are currently not supported
	[Dec17 21:06] overlayfs: idmapped layers are currently not supported
	[Dec17 21:08] overlayfs: idmapped layers are currently not supported
	[Dec17 21:12] overlayfs: idmapped layers are currently not supported
	[Dec17 21:13] overlayfs: idmapped layers are currently not supported
	[Dec17 21:14] overlayfs: idmapped layers are currently not supported
	[ +43.653071] overlayfs: idmapped layers are currently not supported
	[Dec17 21:15] overlayfs: idmapped layers are currently not supported
	[Dec17 21:16] overlayfs: idmapped layers are currently not supported
	[Dec17 21:17] overlayfs: idmapped layers are currently not supported
	[  +0.555481] overlayfs: idmapped layers are currently not supported
	[Dec17 21:18] overlayfs: idmapped layers are currently not supported
	[ +18.618704] overlayfs: idmapped layers are currently not supported
	[Dec17 21:19] overlayfs: idmapped layers are currently not supported
	[ +26.163757] overlayfs: idmapped layers are currently not supported
	[Dec17 21:20] overlayfs: idmapped layers are currently not supported
	[Dec17 21:21] kauditd_printk_skb: 8 callbacks suppressed
	[ +22.921341] overlayfs: idmapped layers are currently not supported
	[Dec17 21:24] overlayfs: idmapped layers are currently not supported
	[Dec17 21:25] overlayfs: idmapped layers are currently not supported
	[Dec17 21:36] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [13b62765e940b707f2109f867ff607b28ee2b00bc649622df13dd82104b94781] <==
	{"level":"warn","ts":"2025-12-17T21:37:38.395012Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T21:37:38.408031Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T21:37:38.431153Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T21:37:38.452223Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T21:37:38.477492Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T21:37:38.488367Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T21:37:38.521991Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T21:37:38.526495Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T21:37:38.543272Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T21:37:38.557326Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T21:37:38.628459Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T21:37:38.646729Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T21:37:38.669196Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T21:37:38.682228Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T21:37:38.708265Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T21:37:38.732272Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T21:37:38.750113Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T21:37:38.792624Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T21:37:38.794183Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T21:37:38.800611Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T21:37:38.828720Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T21:37:38.861056Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T21:37:38.876852Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42784","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T21:37:38.900363Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T21:37:38.966580Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42810","server-name":"","error":"EOF"}
	
	
	==> etcd [e54d9f4786a1252ba2f1afa2ea94a40c28d6ddec95c5afb31c0369d80e532d7f] <==
	{"level":"warn","ts":"2025-12-17T21:37:01.371405Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T21:37:01.384561Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T21:37:01.408845Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T21:37:01.427178Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T21:37:01.442279Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T21:37:01.464828Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T21:37:01.529270Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59260","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-17T21:37:27.256878Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-17T21:37:27.256954Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-918446","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"error","ts":"2025-12-17T21:37:27.257070Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-17T21:37:27.538348Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-17T21:37:27.538443Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-17T21:37:27.538466Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2025-12-17T21:37:27.538497Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-17T21:37:27.538599Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-12-17T21:37:27.538640Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-17T21:37:27.538662Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-17T21:37:27.538673Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-17T21:37:27.538587Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-17T21:37:27.538687Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-17T21:37:27.538695Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-17T21:37:27.541929Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"error","ts":"2025-12-17T21:37:27.542028Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-17T21:37:27.542114Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-17T21:37:27.542143Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-918446","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	
	==> kernel <==
	 21:37:58 up  4:20,  0 user,  load average: 2.14, 1.54, 1.74
	Linux pause-918446 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1eb61928f5d77c91d1c42c08faf26efa6f642b7bbeb923ce2dd2d46594c88b3d] <==
	I1217 21:37:12.513394       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1217 21:37:12.514591       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1217 21:37:12.514771       1 main.go:148] setting mtu 1500 for CNI 
	I1217 21:37:12.514791       1 main.go:178] kindnetd IP family: "ipv4"
	I1217 21:37:12.514805       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-17T21:37:12Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1217 21:37:12.719240       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1217 21:37:12.724031       1 controller.go:381] "Waiting for informer caches to sync"
	I1217 21:37:12.725393       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1217 21:37:12.725554       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1217 21:37:12.913665       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1217 21:37:12.917673       1 metrics.go:72] Registering metrics
	I1217 21:37:12.917844       1 controller.go:711] "Syncing nftables rules"
	I1217 21:37:22.723715       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1217 21:37:22.723787       1 main.go:301] handling current node
	
	
	==> kindnet [dea8eca7161c61cb894081fc18ba1333ebfce11ca17673be95f8ae09baa586f7] <==
	I1217 21:37:35.328461       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1217 21:37:35.328655       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1217 21:37:35.328787       1 main.go:148] setting mtu 1500 for CNI 
	I1217 21:37:35.328799       1 main.go:178] kindnetd IP family: "ipv4"
	I1217 21:37:35.328809       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-17T21:37:35Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1217 21:37:35.535248       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1217 21:37:35.535514       1 controller.go:381] "Waiting for informer caches to sync"
	I1217 21:37:35.535562       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1217 21:37:35.553766       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1217 21:37:35.554744       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1217 21:37:40.159685       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1217 21:37:40.159779       1 metrics.go:72] Registering metrics
	I1217 21:37:40.159867       1 controller.go:711] "Syncing nftables rules"
	I1217 21:37:45.535861       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1217 21:37:45.535944       1 main.go:301] handling current node
	I1217 21:37:55.536022       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1217 21:37:55.536066       1 main.go:301] handling current node
	
	
	==> kube-apiserver [730d2f116ff8547a48b3b177ef0205a4285f5b1a2d27d3af70f6c52dce002c86] <==
	I1217 21:37:40.078251       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1217 21:37:40.099708       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1217 21:37:40.106392       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1217 21:37:40.106471       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1217 21:37:40.118742       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1217 21:37:40.119020       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1217 21:37:40.119151       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1217 21:37:40.119480       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1217 21:37:40.119677       1 aggregator.go:171] initial CRD sync complete...
	I1217 21:37:40.119719       1 autoregister_controller.go:144] Starting autoregister controller
	I1217 21:37:40.119748       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1217 21:37:40.119777       1 cache.go:39] Caches are synced for autoregister controller
	I1217 21:37:40.126544       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1217 21:37:40.131643       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1217 21:37:40.139181       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1217 21:37:40.139647       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1217 21:37:40.151820       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	E1217 21:37:40.158738       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1217 21:37:40.713086       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1217 21:37:42.073599       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1217 21:37:43.685117       1 controller.go:667] quota admission added evaluator for: endpoints
	I1217 21:37:43.736672       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1217 21:37:43.785069       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1217 21:37:43.836360       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1217 21:37:43.939509       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-apiserver [776d553a4c8267d516c1d7cd7f0f211d2fd8a6fbb10a89792e1dab3050e69a60] <==
	W1217 21:37:27.282039       1 logging.go:55] [core] [Channel #227 SubChannel #229]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 21:37:27.282116       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 21:37:27.282219       1 logging.go:55] [core] [Channel #43 SubChannel #45]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 21:37:27.282319       1 logging.go:55] [core] [Channel #63 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 21:37:27.282401       1 logging.go:55] [core] [Channel #115 SubChannel #117]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 21:37:27.282486       1 logging.go:55] [core] [Channel #251 SubChannel #253]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 21:37:27.285747       1 logging.go:55] [core] [Channel #191 SubChannel #193]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 21:37:27.285910       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 21:37:27.286154       1 logging.go:55] [core] [Channel #171 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 21:37:27.286219       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 21:37:27.286277       1 logging.go:55] [core] [Channel #215 SubChannel #217]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 21:37:27.286333       1 logging.go:55] [core] [Channel #111 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 21:37:27.286389       1 logging.go:55] [core] [Channel #75 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 21:37:27.286440       1 logging.go:55] [core] [Channel #83 SubChannel #85]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 21:37:27.286492       1 logging.go:55] [core] [Channel #127 SubChannel #129]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 21:37:27.286548       1 logging.go:55] [core] [Channel #39 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 21:37:27.286602       1 logging.go:55] [core] [Channel #55 SubChannel #57]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 21:37:27.286658       1 logging.go:55] [core] [Channel #87 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 21:37:27.286714       1 logging.go:55] [core] [Channel #143 SubChannel #145]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 21:37:27.286785       1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 21:37:27.286848       1 logging.go:55] [core] [Channel #231 SubChannel #233]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 21:37:27.286911       1 logging.go:55] [core] [Channel #247 SubChannel #249]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 21:37:27.286965       1 logging.go:55] [core] [Channel #79 SubChannel #81]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 21:37:27.287294       1 logging.go:55] [core] [Channel #139 SubChannel #141]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 21:37:27.287377       1 logging.go:55] [core] [Channel #219 SubChannel #221]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
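All of these warnings carry the 21:37:27 timestamp, the same moment the first etcd instance logged "closing etcd server" above, so they record the outgoing apiserver losing its storage backend during the planned restart rather than a new failure.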
	
	
	==> kube-controller-manager [8a445923145e34da32a48263e4db5c4034994c139614d9318bf0059d4f765b78] <==
	I1217 21:37:43.460830       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 21:37:43.462288       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1217 21:37:43.465704       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1217 21:37:43.468039       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1217 21:37:43.478579       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1217 21:37:43.478588       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1217 21:37:43.478606       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1217 21:37:43.478621       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1217 21:37:43.478636       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1217 21:37:43.478650       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1217 21:37:43.478662       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1217 21:37:43.479680       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1217 21:37:43.479737       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1217 21:37:43.480952       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1217 21:37:43.486181       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1217 21:37:43.486307       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1217 21:37:43.486390       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-918446"
	I1217 21:37:43.486441       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1217 21:37:43.491710       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1217 21:37:43.491760       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1217 21:37:43.496014       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1217 21:37:43.501341       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1217 21:37:43.503856       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1217 21:37:43.512518       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1217 21:37:53.487732       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-controller-manager [d2cb624370de704a2f2fb8a42c2d5a2a132c9c0f57353890ebc7ffa8d923605a] <==
	I1217 21:37:09.265055       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1217 21:37:09.275741       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1217 21:37:09.281301       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1217 21:37:09.284663       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1217 21:37:09.284755       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1217 21:37:09.287222       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1217 21:37:09.287380       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1217 21:37:09.287419       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1217 21:37:09.287448       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1217 21:37:09.287541       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1217 21:37:09.287717       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1217 21:37:09.287705       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1217 21:37:09.287975       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1217 21:37:09.288068       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1217 21:37:09.289211       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1217 21:37:09.289295       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1217 21:37:09.290327       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1217 21:37:09.290439       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1217 21:37:09.290478       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1217 21:37:09.292269       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1217 21:37:09.296442       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1217 21:37:09.305565       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 21:37:09.316902       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1217 21:37:09.341083       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1217 21:37:24.242639       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [1cca7e7499270d5a13c177bfb97573b991962b599df6a6ce2260aed393abbb0d] <==
	I1217 21:37:35.230260       1 server_linux.go:53] "Using iptables proxy"
	I1217 21:37:36.752604       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1217 21:37:40.121247       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1217 21:37:40.121290       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1217 21:37:40.121370       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 21:37:40.408085       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 21:37:40.408203       1 server_linux.go:132] "Using iptables Proxier"
	I1217 21:37:40.412862       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 21:37:40.413204       1 server.go:527] "Version info" version="v1.34.3"
	I1217 21:37:40.413388       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 21:37:40.414655       1 config.go:200] "Starting service config controller"
	I1217 21:37:40.414719       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 21:37:40.414762       1 config.go:106] "Starting endpoint slice config controller"
	I1217 21:37:40.414790       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 21:37:40.414847       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 21:37:40.414874       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 21:37:40.415645       1 config.go:309] "Starting node config controller"
	I1217 21:37:40.415695       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 21:37:40.415724       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 21:37:40.515892       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1217 21:37:40.515987       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1217 21:37:40.516874       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [eecdfd5180ffaad58834ff194e5867c5f6eabf3b2f43f4e5c424692c8376e31c] <==
	I1217 21:37:10.751128       1 server_linux.go:53] "Using iptables proxy"
	I1217 21:37:10.853733       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1217 21:37:10.954988       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1217 21:37:10.955063       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1217 21:37:10.955150       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 21:37:11.001415       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 21:37:11.001496       1 server_linux.go:132] "Using iptables Proxier"
	I1217 21:37:11.011072       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 21:37:11.011429       1 server.go:527] "Version info" version="v1.34.3"
	I1217 21:37:11.011457       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 21:37:11.013227       1 config.go:200] "Starting service config controller"
	I1217 21:37:11.013249       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 21:37:11.013266       1 config.go:106] "Starting endpoint slice config controller"
	I1217 21:37:11.013270       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 21:37:11.013280       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 21:37:11.013284       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 21:37:11.013935       1 config.go:309] "Starting node config controller"
	I1217 21:37:11.013946       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 21:37:11.013952       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 21:37:11.113569       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1217 21:37:11.113605       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1217 21:37:11.113640       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [2a5ce624f1a768e61331b170f438d08748328a6bad10cd0836782475886d0f78] <==
	I1217 21:37:39.729461       1 serving.go:386] Generated self-signed cert in-memory
	I1217 21:37:42.036348       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.3"
	I1217 21:37:42.036666       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 21:37:42.042865       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1217 21:37:42.043043       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 21:37:42.043198       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 21:37:42.043003       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1217 21:37:42.043285       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1217 21:37:42.043056       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1217 21:37:42.046773       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1217 21:37:42.043069       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1217 21:37:42.143609       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1217 21:37:42.143974       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 21:37:42.148187       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kube-scheduler [dde6c95ebd91853deb5bebbd95070104e1b043bdf297eae1114f0db44dd281ab] <==
	E1217 21:37:02.359611       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1217 21:37:02.359759       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1217 21:37:02.359840       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1217 21:37:02.360026       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1217 21:37:02.360074       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1217 21:37:02.360190       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1217 21:37:02.360240       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1217 21:37:02.360302       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1217 21:37:02.360327       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1217 21:37:03.203016       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1217 21:37:03.235088       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1217 21:37:03.303095       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1217 21:37:03.318560       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1217 21:37:03.477322       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1217 21:37:03.523489       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1217 21:37:03.547036       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1217 21:37:03.600547       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1217 21:37:03.616657       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	I1217 21:37:06.616110       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 21:37:27.254749       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1217 21:37:27.254808       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 21:37:27.255844       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1217 21:37:27.255879       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1217 21:37:27.255897       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1217 21:37:27.255913       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Dec 17 21:37:35 pause-918446 kubelet[1353]: E1217 21:37:35.079862    1353 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kindnet-7v8zq\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="3f504d98-d9d6-494d-8e63-da19b280fbb4" pod="kube-system/kindnet-7v8zq"
	Dec 17 21:37:35 pause-918446 kubelet[1353]: E1217 21:37:35.080032    1353 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w6lj6\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="c0ee899c-83cb-4b55-a66f-7ddad08cb670" pod="kube-system/kube-proxy-w6lj6"
	Dec 17 21:37:35 pause-918446 kubelet[1353]: E1217 21:37:35.080186    1353 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-jtb8l\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="1134b539-53f9-4702-a716-ed4285b2123e" pod="kube-system/coredns-66bc5c9577-jtb8l"
	Dec 17 21:37:35 pause-918446 kubelet[1353]: I1217 21:37:35.083650    1353 scope.go:117] "RemoveContainer" containerID="e54d9f4786a1252ba2f1afa2ea94a40c28d6ddec95c5afb31c0369d80e532d7f"
	Dec 17 21:37:35 pause-918446 kubelet[1353]: E1217 21:37:35.084186    1353 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kindnet-7v8zq\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="3f504d98-d9d6-494d-8e63-da19b280fbb4" pod="kube-system/kindnet-7v8zq"
	Dec 17 21:37:35 pause-918446 kubelet[1353]: E1217 21:37:35.084375    1353 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w6lj6\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="c0ee899c-83cb-4b55-a66f-7ddad08cb670" pod="kube-system/kube-proxy-w6lj6"
	Dec 17 21:37:35 pause-918446 kubelet[1353]: E1217 21:37:35.084550    1353 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-jtb8l\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="1134b539-53f9-4702-a716-ed4285b2123e" pod="kube-system/coredns-66bc5c9577-jtb8l"
	Dec 17 21:37:35 pause-918446 kubelet[1353]: E1217 21:37:35.084713    1353 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-918446\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="30d5bf2c1d75f6460c1279bfe1e180cc" pod="kube-system/etcd-pause-918446"
	Dec 17 21:37:35 pause-918446 kubelet[1353]: E1217 21:37:35.084885    1353 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-918446\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="305702fa0014ad925d7452a45ef04fb8" pod="kube-system/kube-apiserver-pause-918446"
	Dec 17 21:37:35 pause-918446 kubelet[1353]: E1217 21:37:35.085045    1353 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-918446\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="f17c397290e51c5b34e50c8d2666f5e5" pod="kube-system/kube-controller-manager-pause-918446"
	Dec 17 21:37:35 pause-918446 kubelet[1353]: E1217 21:37:35.085208    1353 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-918446\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="f92ee7eab4be3269dcb232790487d996" pod="kube-system/kube-scheduler-pause-918446"
	Dec 17 21:37:35 pause-918446 kubelet[1353]: I1217 21:37:35.652596    1353 setters.go:543] "Node became not ready" node="pause-918446" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-17T21:37:35Z","lastTransitionTime":"2025-12-17T21:37:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: plugin status uninitialized"}
	Dec 17 21:37:36 pause-918446 kubelet[1353]: E1217 21:37:36.781696    1353 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: plugin status uninitialized" pod="kube-system/coredns-66bc5c9577-jtb8l" podUID="1134b539-53f9-4702-a716-ed4285b2123e"
	Dec 17 21:37:38 pause-918446 kubelet[1353]: E1217 21:37:38.781069    1353 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: plugin status uninitialized" pod="kube-system/coredns-66bc5c9577-jtb8l" podUID="1134b539-53f9-4702-a716-ed4285b2123e"
	Dec 17 21:37:39 pause-918446 kubelet[1353]: E1217 21:37:39.984674    1353 reflector.go:205] "Failed to watch" err="configmaps \"kube-proxy\" is forbidden: User \"system:node:pause-918446\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-918446' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Dec 17 21:37:39 pause-918446 kubelet[1353]: E1217 21:37:39.990559    1353 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-918446\" is forbidden: User \"system:node:pause-918446\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-918446' and this object" podUID="30d5bf2c1d75f6460c1279bfe1e180cc" pod="kube-system/etcd-pause-918446"
	Dec 17 21:37:39 pause-918446 kubelet[1353]: E1217 21:37:39.991028    1353 reflector.go:205] "Failed to watch" err="configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:pause-918446\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-918446' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Dec 17 21:37:39 pause-918446 kubelet[1353]: E1217 21:37:39.991166    1353 reflector.go:205] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-918446\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-918446' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Dec 17 21:37:40 pause-918446 kubelet[1353]: E1217 21:37:40.030462    1353 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-918446\" is forbidden: User \"system:node:pause-918446\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-918446' and this object" podUID="305702fa0014ad925d7452a45ef04fb8" pod="kube-system/kube-apiserver-pause-918446"
	Dec 17 21:37:40 pause-918446 kubelet[1353]: E1217 21:37:40.054709    1353 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-918446\" is forbidden: User \"system:node:pause-918446\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-918446' and this object" podUID="f17c397290e51c5b34e50c8d2666f5e5" pod="kube-system/kube-controller-manager-pause-918446"
	Dec 17 21:37:40 pause-918446 kubelet[1353]: I1217 21:37:40.784475    1353 scope.go:117] "RemoveContainer" containerID="435b580572a8bd0462449f84c02d6482d96d57145cf44d953cdbce897db99730"
	Dec 17 21:37:45 pause-918446 kubelet[1353]: W1217 21:37:45.011945    1353 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Dec 17 21:37:53 pause-918446 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 17 21:37:53 pause-918446 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 17 21:37:53 pause-918446 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
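
The kube-proxy and kube-scheduler logs above are dominated by client-go's shared-informer startup handshake: each controller logs "Waiting for caches to sync" and then "Caches are synced" once its local cache has caught up with the apiserver. A minimal sketch of that handshake, assuming a reachable cluster and a kubeconfig at the default path (the clientset wiring is illustrative, not minikube's or kube-proxy's actual code):

	package main

	import (
		"fmt"
		"log"
		"os"
		"path/filepath"
		"time"

		"k8s.io/client-go/informers"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/cache"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumption: kubeconfig at ~/.kube/config points at a live cluster.
		kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			log.Fatal(err)
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}

		factory := informers.NewSharedInformerFactory(clientset, 30*time.Second)
		nodeInformer := factory.Core().V1().Nodes().Informer()

		stop := make(chan struct{})
		defer close(stop)
		factory.Start(stop) // corresponds to "Waiting for caches to sync"

		// Blocks until the initial LIST+WATCH has populated the local cache;
		// components log "Caches are synced" at this point.
		if !cache.WaitForCacheSync(stop, nodeInformer.HasSynced) {
			log.Fatal("timed out waiting for node informer cache")
		}
		fmt.Println("caches are synced")
	}

Until the sync completes a component serves nothing from its cache, which is why each kube-proxy restart (two container logs above) replays the full handshake before reprogramming traffic rules.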
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-918446 -n pause-918446
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-918446 -n pause-918446: exit status 2 (388.72311ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
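The `--format={{.APIServer}}` flag in the status call above is a Go text/template applied to minikube's status struct, which is why stdout is the single word "Running" while the exit code separately reports overall cluster state (hence the "(may be ok)" note). A minimal sketch of that field selection, with an illustrative struct whose field names mirror the flag but are not minikube's actual type:

	package main

	import (
		"os"
		"text/template"
	)

	// Illustrative status struct; minikube's real type lives in its own package.
	type Status struct {
		Host      string
		Kubelet   string
		APIServer string
	}

	func main() {
		st := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Running"}
		// Parse the user-supplied format string and render exactly one field.
		tmpl := template.Must(template.New("status").Parse("{{.APIServer}}"))
		if err := tmpl.Execute(os.Stdout, st); err != nil {
			os.Exit(1)
		}
		// Output: Running
	}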
helpers_test.go:270: (dbg) Run:  kubectl --context pause-918446 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (6.27s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (7200.092s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[identical WARNING line repeated 7 more times while polling]
E1217 22:08:21.928601  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[identical WARNING line repeated 34 more times while polling]
E1217 22:08:56.661771  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[identical WARNING line repeated 45 more times while polling]
E1217 22:09:42.912083  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/auto-225508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 22:09:42.919799  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/auto-225508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 22:09:42.931183  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/auto-225508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1217 22:09:42.953334  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/auto-225508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 22:09:42.994658  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/auto-225508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 22:09:43.083710  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/auto-225508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 22:09:43.245357  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/auto-225508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 22:09:43.567416  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/auto-225508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1217 22:09:44.209181  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/auto-225508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1217 22:09:45.490528  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/auto-225508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
(previous message repeated 2 more times)
E1217 22:09:48.052179  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/auto-225508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
(previous message repeated 4 more times)
E1217 22:09:53.173868  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/auto-225508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
(previous message repeated 9 more times)
E1217 22:10:03.416203  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/auto-225508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
(previous message repeated 19 more times)
E1217 22:10:23.898203  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/auto-225508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
(previous message repeated 6 more times)
E1217 22:10:30.852090  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-643319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
(previous message repeated 31 more times)
E1217 22:11:02.286256  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/old-k8s-version-391059/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
(previous message repeated 4 more times)
E1217 22:11:07.168639  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/kindnet-225508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1217 22:11:08.569830  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/default-k8s-diff-port-310739/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
(previous message repeated 15 more times)
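
Every warning in the run above has the same root cause: nothing is accepting connections on the apiserver endpoint 192.168.76.2:8443, so each pod-list poll fails with connection refused and is retried until the suite-wide timeout below finally fires.
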
panic: test timed out after 2h0m0s
	running tests:
		TestNetworkPlugins (32m54s)
		TestNetworkPlugins/group/enable-default-cni (1m34s)
		TestStartStop (34m51s)
		TestStartStop/group/no-preload (28m44s)
		TestStartStop/group/no-preload/serial (28m44s)
		TestStartStop/group/no-preload/serial/AddonExistsAfterStop (3m10s)

goroutine 6395 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2682 +0x2b0
created by time.goFunc
	/usr/local/go/src/time/sleep.go:215 +0x38
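
This panic is the Go test binary's own watchdog rather than anything minikube-specific: testing.(*M).startAlarm arms a timer for the -timeout value (2h0m0s on this job) and, when it fires, panics and dumps every live goroutine. A minimal sketch of the same failure shape (hypothetical package and test name), which panics identically when run with go test -timeout 5s:

package timeoutsketch

import (
	"testing"
	"time"
)

// TestHangsPastDeadline parks for longer than the -timeout deadline, so
// `go test -timeout 5s` panics with "test timed out after 5s" and prints
// a goroutine dump shaped like the one in this report. In this run the
// hung goroutine is the PodWait poller shown further below (goroutine 5804).
func TestHangsPastDeadline(t *testing.T) {
	time.Sleep(10 * time.Second)
}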

goroutine 1 [chan receive, 28 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1891 +0x3d0
testing.tRunner(0x40004c4fc0, 0x40008d5bb8)
	/usr/local/go/src/testing/testing.go:1940 +0x104
testing.runTests(0x40006d22a0, {0x534c680, 0x2c, 0x2c}, {0x40008d5d08?, 0x125774?, 0x53750c0?})
	/usr/local/go/src/testing/testing.go:2475 +0x3b8
testing.(*M).Run(0x40006b72c0)
	/usr/local/go/src/testing/testing.go:2337 +0x530
k8s.io/minikube/test/integration.TestMain(0x40006b72c0)
	/home/jenkins/workspace/Build_Cross/test/integration/main_test.go:64 +0xf0
main.main()
	_testmain.go:133 +0x88

goroutine 6299 [chan receive]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x4001e0bbc0, 0x4000082310)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 6288
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0
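
The cert_rotation.go:172 errors interleaved with the warnings above come from goroutines like this one: when a kubeconfig references client certificate files on disk, client-go's TLS transport cache starts a background dynamicClientCert worker that periodically re-reads them, and once a deleted profile such as auto-225508 takes its client.crt with it, every reload logs "Loading client cert failed". A hedged sketch of the kind of configuration that spawns such a worker (host and paths are illustrative, not this job's):

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg := &rest.Config{
		Host: "https://192.168.76.2:8443",
		TLSClientConfig: rest.TLSClientConfig{
			// File-based credentials are what start the cert-rotation
			// goroutines seen in this dump; these paths are illustrative.
			CertFile: "/tmp/profiles/example/client.crt",
			KeyFile:  "/tmp/profiles/example/client.key",
		},
	}
	// NewForConfig caches a TLS transport keyed on these files and spawns
	// dynamicClientCert.run to re-read them in the background; once the
	// files are deleted, each reload logs "Loading client cert failed".
	clientset, err := kubernetes.NewForConfig(cfg)
	fmt.Println(clientset != nil, err)
}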

goroutine 3829 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3828
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 5346 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 5345
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 6304 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 6303
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 1503 [chan receive, 83 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x4001995f20, 0x4000082310)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 1501
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 4093 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0x40018cab50, 0x16)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x40018cab40)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702ae0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x40016f30e0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x4001de6af0?, 0x1618bc?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e6930?, 0x4000082310?}, 0x4001794ea8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e6930, 0x4000082310}, 0x40000d8f38, {0x369e4a0, 0x4001709f50}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x4001794fa8?, {0x369e4a0?, 0x4001709f50?}, 0x90?, 0x161f90?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4001d19440, 0x3b9aca00, 0x0, 0x1, 0x4000082310)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4090
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 5616 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff5e0, {{0x36f4250, 0x40001bc080?}, 0x400158afc0?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 5615
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 5646 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 5645
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 1502 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff5e0, {{0x36f4250, 0x40001bc080?}, 0x4001474700?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 1501
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 1576 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0x4001d66a10, 0x24)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x4001d66a00)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702ae0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4001995f20)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x40019a4ab0?, 0x40002f1270?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e6930?, 0x4000082310?}, 0x40002f12e0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e6930, 0x4000082310}, 0x40012e0f38, {0x369e4a0, 0x4001cfdad0}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x40002f1260?, {0x369e4a0?, 0x4001cfdad0?}, 0x90?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4001e93930, 0x3b9aca00, 0x0, 0x1, 0x4000082310)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 1503
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174
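
Most of the remaining parked goroutines (174, 1576, 4093, 5085, 6302, ...) share this stack shape: wait.Until re-runs a worker that blocks in workqueue.(*Typed).Get until a cert-reload item arrives, which is the sync.Cond.Wait frame at the top of each stack. A stripped-down sketch of that standard client-go worker loop (item and names are illustrative, not minikube code):

package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/util/workqueue"
)

func main() {
	queue := workqueue.NewTyped[string]()
	stop := make(chan struct{})

	// wait.Until restarts the worker every second until stop closes;
	// inside, Get parks the goroutine (sync.Cond.Wait) while the queue
	// is empty — exactly the idle state in the stacks above.
	go wait.Until(func() {
		for {
			item, shutdown := queue.Get()
			if shutdown {
				return
			}
			fmt.Println("reloading:", item)
			queue.Done(item)
		}
	}, time.Second, stop)

	queue.Add("client.crt")
	time.Sleep(100 * time.Millisecond)
	queue.ShutDown()
	close(stop)
}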

goroutine 6302 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0x4001cd9a10, 0x0)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x4001cd9a00)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702ae0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4001e0bbc0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x4001b62690?, 0x21dd4?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e6930?, 0x4000082310?}, 0x40002b52e0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e6930, 0x4000082310}, 0x40013cff38, {0x369e4a0, 0x400173e3c0}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x11?, {0x369e4a0?, 0x400173e3c0?}, 0x1?, 0x36e6598?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x40016cd520, 0x3b9aca00, 0x0, 0x1, 0x4000082310)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 6299
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 1577 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e6930, 0x4000082310}, 0x4001646740, 0x4001563f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e6930, 0x4000082310}, 0x38?, 0x4001646740, 0x4001646788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e6930?, 0x4000082310?}, 0x4001ca8a80?, 0x4001cc0000?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x4001d87200?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 1503
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 177 [chan receive, 117 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x40016f2ae0, 0x4000082310)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 167
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 160 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff5e0, {{0x36f4250, 0x40001bc080?}, 0x40015b81e0?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 167
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 5986 [chan receive, 3 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x40018401e0, 0x4000082310)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 5970
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 174 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0x400044c910, 0x2d)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x400044c900)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702ae0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x40016f2ae0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x40004ef9d0?, 0x1618bc?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e6930?, 0x4000082310?}, 0x400009eea8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e6930, 0x4000082310}, 0x4001298f38, {0x369e4a0, 0x40012c4ba0}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x400009efa8?, {0x369e4a0?, 0x40012c4ba0?}, 0x70?, 0x400023f098?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x40015a0960, 0x3b9aca00, 0x0, 0x1, 0x4000082310)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 177
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 175 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e6930, 0x4000082310}, 0x40000a1f40, 0x40013c9f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e6930, 0x4000082310}, 0xf8?, 0x40000a1f40, 0x40000a1f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e6930?, 0x4000082310?}, 0x0?, 0x40000a1f50?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x36f4250?, 0x40001bc080?, 0x40015b81e0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 177
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 176 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 175
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 5804 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x36e6528, 0x40006ac5a0}, {0x36d45e0, 0x40019e59c0}, 0x1, 0x0, 0x4001813b00)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/loop.go:66 +0x158
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x36e6598?, 0x400022ec40?}, 0x3b9aca00, 0x4001813d28?, 0x1, 0x4001813b00)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:48 +0x8c
k8s.io/minikube/test/integration.PodWait({0x36e6598, 0x400022ec40}, 0x40006541c0, {0x4001d0c5e8, 0x11}, {0x2994202, 0x14}, {0x29ac171, 0x1c}, 0x7dba821800)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:380 +0x22c
k8s.io/minikube/test/integration.validateAddonAfterStop({0x36e6598, 0x400022ec40}, 0x40006541c0, {0x4001d0c5e8, 0x11}, {0x297870e?, 0x38584f9700161e84?}, {0x694329cd?, 0x4001596f58?}, {0x161f08?, ...})
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:285 +0xd4
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0x40006541c0?)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:154 +0x44
testing.tRunner(0x40006541c0, 0x400166c000)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 3944
	/usr/local/go/src/testing/testing.go:1997 +0x364
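
Goroutine 5804 is the test that actually consumed the timeout: validateAddonAfterStop calls the integration suite's PodWait helper, which loops on wait.PollUntilContextTimeout waiting for k8s-app=kubernetes-dashboard pods; with the apiserver refusing connections, no attempt can ever succeed, so the suite alarm fires first. A simplified sketch of that polling pattern (the condition below is illustrative, not minikube's pod check):

package main

import (
	"context"
	"fmt"
	"net"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	// Poll every 2s, give up after 1m, running the condition immediately.
	err := wait.PollUntilContextTimeout(context.Background(),
		2*time.Second, time.Minute, true,
		func(ctx context.Context) (bool, error) {
			conn, err := net.DialTimeout("tcp", "192.168.76.2:8443", time.Second)
			if err != nil {
				// "connect: connection refused" lands here; returning
				// (false, nil) tells the poller to try again.
				return false, nil
			}
			conn.Close()
			return true, nil
		})
	fmt.Println("poll finished:", err) // deadline exceeded when it never succeeds
}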

goroutine 2020 [chan send, 81 minutes]:
os/exec.(*Cmd).watchCtx(0x400073c780, 0x4001a2a7e0)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 1514
	/usr/local/go/src/os/exec/exec.go:775 +0x678
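
Goroutine 2020 is a leaked command watcher: os/exec's watchCtx has been blocked on a channel send for 81 minutes, the state it reaches when a command started with a context finishes but its result is never drained by Wait. A hedged sketch of that leak pattern (command and sleeps are illustrative):

package main

import (
	"context"
	"os/exec"
	"time"
)

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	cmd := exec.CommandContext(ctx, "sleep", "1")
	_ = cmd.Start() // with a context, Start spawns the internal watchCtx goroutine

	// Without a matching cmd.Wait(), watchCtx can sit in "chan send"
	// indefinitely once the process exits — the 81-minute state above.
	time.Sleep(2 * time.Second)

	_ = cmd.Wait() // draining the result releases the goroutine
}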

goroutine 978 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e6930, 0x4000082310}, 0x400164df40, 0x400129df88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e6930, 0x4000082310}, 0x38?, 0x400164df40, 0x400164df88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e6930?, 0x4000082310?}, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 973
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 4335 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 4334
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 979 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 978
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 3944 [chan receive, 3 minutes]:
testing.(*T).Run(0x40016e8c40, {0x2994252?, 0x6ee?}, 0x400166c000)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0x40016e8c40)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:153 +0x1b8
testing.tRunner(0x40016e8c40, 0x4001b9a200)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 3514
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 3801 [chan receive, 30 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x40016f20c0, 0x4000082310)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3823
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 4329 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff5e0, {{0x36f4250, 0x40001bc080?}, 0x400148a000?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 4328
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 5085 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0x4001d67950, 0xf)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x4001d67940)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702ae0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x40019d2300)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x40012f4ee0?, 0x21dd4?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e6930?, 0x4000082310?}, 0x400141fea8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e6930, 0x4000082310}, 0x40012dcf38, {0x369e4a0, 0x40017346c0}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x11?, {0x369e4a0?, 0x40017346c0?}, 0x1?, 0x36e6598?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4004f11110, 0x3b9aca00, 0x0, 0x1, 0x4000082310)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 5082
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

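[Editor's note] Goroutine 5085 (and the many dynamicClientCert goroutines like it here) is the standard client-go certificate-rotation worker: wait.Until drives runWorker, which parks in a typed workqueue's Get -- the sync.Cond.Wait frame at the top of the stack. A minimal, illustrative sketch of that worker pattern, not minikube's actual code:

package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/util/workqueue"
)

func main() {
	queue := workqueue.NewTyped[string]()
	stopCh := make(chan struct{})

	go wait.Until(func() {
		for {
			item, shutdown := queue.Get() // parks in sync.Cond.Wait, as in the dump
			if shutdown {
				return
			}
			fmt.Println("processing", item)
			queue.Done(item)
		}
	}, time.Second, stopCh)

	queue.Add("rotate-client-cert")
	time.Sleep(100 * time.Millisecond)
	queue.ShutDown() // unblocks Get with shutdown=true
	close(stopCh)
}
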
goroutine 1236 [chan send, 112 minutes]:
os/exec.(*Cmd).watchCtx(0x4001d87680, 0x4001de6070)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 1235
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 709 [IO wait, 114 minutes]:
internal/poll.runtime_pollWait(0xffff3adddc00, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x400048e600?, 0xdbd0c?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0x400048e600)
	/usr/local/go/src/internal/poll/fd_unix.go:613 +0x21c
net.(*netFD).accept(0x400048e600)
	/usr/local/go/src/net/fd_unix.go:161 +0x28
net.(*TCPListener).accept(0x40017e4b80)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x24
net.(*TCPListener).Accept(0x40017e4b80)
	/usr/local/go/src/net/tcpsock.go:380 +0x2c
net/http.(*Server).Serve(0x4000104e00, {0x36d3f80, 0x40017e4b80})
	/usr/local/go/src/net/http/server.go:3463 +0x24c
net/http.(*Server).ListenAndServe(0x4000104e00)
	/usr/local/go/src/net/http/server.go:3389 +0x80
k8s.io/minikube/test/integration.startHTTPProxy.func1(...)
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2218
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 707
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2217 +0x104

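[Editor's note] Goroutine 709 above (and 1331 just below) is the suite's HTTP proxy from functional_test.go:2217, parked in Accept; "IO wait, 114 minutes" only means no connection has arrived, not a fault. A throwaway sketch of that background-server shape, assuming nothing about the real proxy beyond what the frames show:

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	srv := &http.Server{Addr: "127.0.0.1:0"} // ephemeral port, like a test proxy
	done := make(chan error, 1)
	go func() {
		// Blocks in net.(*TCPListener).Accept for the server's lifetime,
		// which is exactly the frame the two proxy goroutines show.
		done <- srv.ListenAndServe()
	}()
	time.Sleep(100 * time.Millisecond) // stand-in for the test body
	_ = srv.Close()
	fmt.Println("proxy exited:", <-done) // http.ErrServerClosed
}
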
goroutine 1331 [IO wait, 110 minutes]:
internal/poll.runtime_pollWait(0xffff3a95b400, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x400048ed80?, 0xdbd0c?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0x400048ed80)
	/usr/local/go/src/internal/poll/fd_unix.go:613 +0x21c
net.(*netFD).accept(0x400048ed80)
	/usr/local/go/src/net/fd_unix.go:161 +0x28
net.(*TCPListener).accept(0x4001d66e00)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x24
net.(*TCPListener).Accept(0x4001d66e00)
	/usr/local/go/src/net/tcpsock.go:380 +0x2c
net/http.(*Server).Serve(0x4001ac2400, {0x36d3f80, 0x4001d66e00})
	/usr/local/go/src/net/http/server.go:3463 +0x24c
net/http.(*Server).ListenAndServe(0x4001ac2400)
	/usr/local/go/src/net/http/server.go:3389 +0x80
k8s.io/minikube/test/integration.startHTTPProxy.func1(...)
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2218
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 1329
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2217 +0x104

goroutine 5990 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e6930, 0x4000082310}, 0x4001790740, 0x4001790788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e6930, 0x4000082310}, 0x0?, 0x4001790740, 0x4001790788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e6930?, 0x4000082310?}, 0x36e6598?, 0x4001e5ef50?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x400011be00?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 5986
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 4095 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 4094
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 3334 [chan receive, 35 minutes]:
testing.(*T).Run(0x4001474380, {0x296d71f?, 0x40013caf58?}, 0x339bcf8)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestStartStop(0x4001474380)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:46 +0x3c
testing.tRunner(0x4001474380, 0x339bb10)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 4090 [chan receive, 26 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x40016f30e0, 0x4000082310)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 4097
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 5989 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0x400175ad50, 0x0)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x400175ad40)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702ae0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x40018401e0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x4001e5f110?, 0x21dd4?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e6930?, 0x4000082310?}, 0x40013ebea8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e6930, 0x4000082310}, 0x400129cf38, {0x369e4a0, 0x40016d3ad0}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x11?, {0x369e4a0?, 0x40016d3ad0?}, 0x28?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4004f11b30, 0x3b9aca00, 0x0, 0x1, 0x4000082310)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 5986
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 5082 [chan receive, 7 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x40019d2300, 0x4000082310)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 5077
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 6394 [IO wait]:
internal/poll.runtime_pollWait(0xffff3addda00, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x400139a4e0?, 0x4001470a00?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0x400139a4e0, {0x4001470a00, 0x200, 0x200})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x1e0
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0x40004931e8, {0x4001470a00?, 0x4001648568?, 0x8b27c?})
	/usr/local/go/src/os/file.go:144 +0x68
bytes.(*Buffer).ReadFrom(0x40012c4480, {0x369c878, 0x4001e66b18})
	/usr/local/go/src/bytes/buffer.go:217 +0x90
io.copyBuffer({0x369ca60, 0x40012c4480}, {0x369c878, 0x4001e66b18}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x14c
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x40004931e8?, {0x369ca60, 0x40012c4480})
	/usr/local/go/src/os/file.go:295 +0x58
os.(*File).WriteTo(0x40004931e8, {0x369ca60, 0x40012c4480})
	/usr/local/go/src/os/file.go:273 +0x9c
io.copyBuffer({0x369ca60, 0x40012c4480}, {0x369c8f8, 0x40004931e8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x98
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x40
os/exec.(*Cmd).Start.func2(0x40015d9dc0?)
	/usr/local/go/src/os/exec/exec.go:749 +0x30
created by os/exec.(*Cmd).Start in goroutine 3629
	/usr/local/go/src/os/exec/exec.go:748 +0x6a4

goroutine 4094 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e6930, 0x4000082310}, 0x400164cf40, 0x400164cf88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e6930, 0x4000082310}, 0x4e?, 0x400164cf40, 0x400164cf88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e6930?, 0x4000082310?}, 0x0?, 0x400164cf50?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x36f4250?, 0x40001bc080?, 0x400073c780?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4090
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 6322 [IO wait]:
internal/poll.runtime_pollWait(0xffff3addd400, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x4001b9a700?, 0x40008c4800?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0x4001b9a700, {0x40008c4800, 0x1800, 0x1800})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x1e0
net.(*netFD).Read(0x4001b9a700, {0x40008c4800?, 0x40008c4800?, 0x5?})
	/usr/local/go/src/net/fd_posix.go:68 +0x28
net.(*conn).Read(0x4001e66418, {0x40008c4800?, 0x40013838a8?, 0x8b27c?})
	/usr/local/go/src/net/net.go:196 +0x34
crypto/tls.(*atLeastReader).Read(0x40019c6540, {0x40008c4800?, 0x4001383908?, 0x2cbb64?})
	/usr/local/go/src/crypto/tls/conn.go:816 +0x38
bytes.(*Buffer).ReadFrom(0x40012ce2a8, {0x369ebc0, 0x40019c6540})
	/usr/local/go/src/bytes/buffer.go:217 +0x90
crypto/tls.(*Conn).readFromUntil(0x40012ce008, {0xffff3a9e4a80, 0x4001cd5890}, 0x40013839b0?)
	/usr/local/go/src/crypto/tls/conn.go:838 +0xcc
crypto/tls.(*Conn).readRecordOrCCS(0x40012ce008, 0x0)
	/usr/local/go/src/crypto/tls/conn.go:627 +0x340
crypto/tls.(*Conn).readRecord(...)
	/usr/local/go/src/crypto/tls/conn.go:589
crypto/tls.(*Conn).Read(0x40012ce008, {0x4001d77000, 0x1000, 0x4000000000?})
	/usr/local/go/src/crypto/tls/conn.go:1392 +0x14c
bufio.(*Reader).Read(0x4001a3b380, {0x400015f384, 0x9, 0x542a60?})
	/usr/local/go/src/bufio/bufio.go:245 +0x188
io.ReadAtLeast({0x369cb00, 0x4001a3b380}, {0x400015f384, 0x9, 0x9}, 0x9)
	/usr/local/go/src/io/io.go:335 +0x98
io.ReadFull(...)
	/usr/local/go/src/io/io.go:354
golang.org/x/net/http2.readFrameHeader({0x400015f384, 0x9, 0x4000000015?}, {0x369cb00?, 0x4001a3b380?})
	/home/jenkins/go/pkg/mod/golang.org/x/net@v0.47.0/http2/frame.go:242 +0x58
golang.org/x/net/http2.(*Framer).ReadFrameHeader(0x400015f340)
	/home/jenkins/go/pkg/mod/golang.org/x/net@v0.47.0/http2/frame.go:505 +0x60
golang.org/x/net/http2.(*Framer).ReadFrame(0x400015f340)
	/home/jenkins/go/pkg/mod/golang.org/x/net@v0.47.0/http2/frame.go:564 +0x20
golang.org/x/net/http2.(*clientConnReadLoop).run(0x4001383f98)
	/home/jenkins/go/pkg/mod/golang.org/x/net@v0.47.0/http2/transport.go:2208 +0xb8
golang.org/x/net/http2.(*ClientConn).readLoop(0x4001481500)
	/home/jenkins/go/pkg/mod/golang.org/x/net@v0.47.0/http2/transport.go:2077 +0x4c
created by golang.org/x/net/http2.(*Transport).newClientConn in goroutine 6321
	/home/jenkins/go/pkg/mod/golang.org/x/net@v0.47.0/http2/transport.go:866 +0xa90

goroutine 3269 [chan receive, 32 minutes]:
testing.(*T).Run(0x4001480a80, {0x296d71f?, 0xd32c7be05cf?}, 0x40016ca2b8)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestNetworkPlugins(0x4001480a80)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:52 +0xe4
testing.tRunner(0x4001480a80, 0x339bac8)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 5645 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e6930, 0x4000082310}, 0x40000a4f40, 0x40000a4f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e6930, 0x4000082310}, 0x18?, 0x40000a4f40, 0x40000a4f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e6930?, 0x4000082310?}, 0x0?, 0x40000a4f50?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x36f4250?, 0x40001bc080?, 0x400158afc0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 5649
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 5081 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff5e0, {{0x36f4250, 0x40001bc080?}, 0x4000447080?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 5077
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 5331 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff5e0, {{0x36f4250, 0x40001bc080?}, 0x4001481500?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 5330
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 3590 [chan receive, 7 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1891 +0x3d0
testing.tRunner(0x40015d88c0, 0x40016ca2b8)
	/usr/local/go/src/testing/testing.go:1940 +0x104
created by testing.(*T).Run in goroutine 3269
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 3827 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0x4001d66010, 0x17)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x4001d66000)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702ae0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x40016f20c0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x40004b7500?, 0x0?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e6930?, 0x4000082310?}, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e6930, 0x4000082310}, 0x400129af38, {0x369e4a0, 0x40018f44e0}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x36f4250?, {0x369e4a0?, 0x40018f44e0?}, 0xc0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4001b7af40, 0x3b9aca00, 0x0, 0x1, 0x4000082310)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 3801
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 1301 [select, 110 minutes]:
net/http.(*persistConn).writeLoop(0x40015d4120)
	/usr/local/go/src/net/http/transport.go:2600 +0x94
created by net/http.(*Transport).dialConn in goroutine 1298
	/usr/local/go/src/net/http/transport.go:1948 +0x1164

goroutine 3828 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e6930, 0x4000082310}, 0x400141ff40, 0x40012def88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e6930, 0x4000082310}, 0x90?, 0x400141ff40, 0x400141ff88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e6930?, 0x4000082310?}, 0x0?, 0x95c64?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x4001480380?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 3801
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 4333 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0x40018ca590, 0x2)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x40018ca580)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702ae0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4001e0ae40)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x40002b5b20?, 0x0?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e6930?, 0x4000082310?}, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e6930, 0x4000082310}, 0x40008d7f38, {0x369e4a0, 0x40019f3b60}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x36f4250?, {0x369e4a0?, 0x40019f3b60?}, 0xc0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4004f11e70, 0x3b9aca00, 0x0, 0x1, 0x4000082310)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4330
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 6298 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff5e0, {{0x36f4250, 0x40001bc080?}, 0x4001a7ea80?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 6288
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 1206 [chan send, 110 minutes]:
os/exec.(*Cmd).watchCtx(0x4001d86a80, 0x4001d27ab0)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 819
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 6303 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e6930, 0x4000082310}, 0x4001793f40, 0x4001793f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e6930, 0x4000082310}, 0x2?, 0x4001793f40, 0x4001793f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e6930?, 0x4000082310?}, 0x0?, 0x4001793f50?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x36f4250?, 0x40001bc080?, 0x4001a7ea80?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 6299
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 1934 [chan send, 81 minutes]:
os/exec.(*Cmd).watchCtx(0x4001626d80, 0x40012f5030)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 1933
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 5312 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0x4001c2ced0, 0xc)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x4001c2cec0)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702ae0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4001e0a5a0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x40016bd420?, 0x1618bc?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e6930?, 0x4000082310?}, 0x4001792ea8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e6930, 0x4000082310}, 0x4001565f38, {0x369e4a0, 0x400173efc0}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x11?, {0x369e4a0?, 0x400173efc0?}, 0x20?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4001b7b650, 0x3b9aca00, 0x0, 0x1, 0x4000082310)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 5332
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 1158 [chan send, 112 minutes]:
os/exec.(*Cmd).watchCtx(0x4001d1b500, 0x4001d26a80)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 1157
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 5649 [chan receive, 3 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x40019ad1a0, 0x4000082310)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 5615
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 5332 [chan receive, 5 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x4001e0a5a0, 0x4000082310)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 5330
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 1921 [chan send, 81 minutes]:
os/exec.(*Cmd).watchCtx(0x4001626300, 0x40012f4770)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 1888
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 3628 [chan receive, 32 minutes]:
testing.(*testState).waitParallel(0x40006ac3c0)
	/usr/local/go/src/testing/testing.go:2116 +0x158
testing.(*T).Parallel(0x4001474e00)
	/usr/local/go/src/testing/testing.go:1709 +0x19c
k8s.io/minikube/test/integration.MaybeParallel(0x4001474e00)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:501 +0x5c
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0x4001474e00)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x2c0
testing.tRunner(0x4001474e00, 0x400015b580)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 3590
	/usr/local/go/src/testing/testing.go:1997 +0x364

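[Editor's note] Goroutine 3628 above (and 3630 below) is not stuck: testing.(*T).Parallel parks each TestNetworkPlugins subtest in waitParallel until a -test.parallel slot frees up. A hedged sketch of that gating, with hypothetical subtest names:

package integration_test

import "testing"

func TestNetworkPluginsShape(t *testing.T) {
	for _, name := range []string{"auto", "kindnet", "calico"} {
		t.Run(name, func(t *testing.T) {
			t.Parallel() // parks here (waitParallel) until a slot is free
			// ... per-plugin assertions would run here ...
		})
	}
}
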
goroutine 3629 [syscall]:
syscall.Syscall6(0x5f, 0x3, 0xf, 0x40008d30e8, 0x4, 0x40016debd0, 0x0)
	/usr/local/go/src/syscall/syscall_linux.go:96 +0x2c
internal/syscall/unix.Waitid(0x40008d3248?, 0x1929a0?, 0x400169b320?, 0x1?, 0x400043ef00?)
	/usr/local/go/src/internal/syscall/unix/waitid_linux.go:18 +0x44
os.(*Process).pidfdWait.func1(...)
	/usr/local/go/src/os/pidfd_linux.go:109
os.ignoringEINTR(...)
	/usr/local/go/src/os/file_posix.go:256
os.(*Process).pidfdWait(0x40006ea640)
	/usr/local/go/src/os/pidfd_linux.go:108 +0x144
os.(*Process).wait(0xffff81869108?)
	/usr/local/go/src/os/exec_unix.go:25 +0x24
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:340
os/exec.(*Cmd).Wait(0x400137de00)
	/usr/local/go/src/os/exec/exec.go:922 +0x38
os/exec.(*Cmd).Run(0x400137de00)
	/usr/local/go/src/os/exec/exec.go:626 +0x38
os/exec.(*Cmd).CombinedOutput(0x400137de00)
	/usr/local/go/src/os/exec/exec.go:1039 +0x7c
k8s.io/minikube/test/integration.debugLogs(0x4001475180, {0x4001b9e460, 0x19})
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:546 +0x5e7c
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0x4001475180)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:211 +0x980
testing.tRunner(0x4001475180, 0x400015b600)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 3590
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 4330 [chan receive, 12 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x4001e0ae40, 0x4000082310)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 4328
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 4089 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff5e0, {{0x36f4250, 0x40001bc080?}, 0x400073c780?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 4097
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 5644 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0x400175a8d0, 0x0)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x400175a8c0)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702ae0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x40019ad1a0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x40016bdce0?, 0x21dd4?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e6930?, 0x4000082310?}, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e6930, 0x4000082310}, 0x400159df38, {0x369e4a0, 0x4001296210}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x11?, {0x369e4a0?, 0x4001296210?}, 0x1?, 0x36e6598?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4001a053a0, 0x3b9aca00, 0x0, 0x1, 0x4000082310)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 5649
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 3514 [chan receive, 28 minutes]:
testing.(*T).Run(0x40014748c0, {0x296eb91?, 0x0?}, 0x4001b9a200)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestStartStop.func1.1(0x40014748c0)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:128 +0x7e4
testing.tRunner(0x40014748c0, 0x4001cd8200)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 3510
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 1578 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 1577
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 5985 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff5e0, {{0x36f4250, 0x40001bc080?}, 0x4001626900?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 5970
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 5345 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e6930, 0x4000082310}, 0x400164df40, 0x400164df88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e6930, 0x4000082310}, 0x0?, 0x400164df40, 0x400164df88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e6930?, 0x4000082310?}, 0x36e6598?, 0x40004ef420?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x40004ef340?, 0x0?, 0x400011b500?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 5332
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 5991 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 5990
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 1300 [select, 110 minutes]:
net/http.(*persistConn).readLoop(0x40015d4120)
	/usr/local/go/src/net/http/transport.go:2398 +0xa6c
created by net/http.(*Transport).dialConn in goroutine 1298
	/usr/local/go/src/net/http/transport.go:1947 +0x111c

goroutine 973 [chan receive, 112 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x40016f3c20, 0x4000082310)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 971
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 977 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0x400175a0d0, 0x2c)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x400175a0c0)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702ae0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x40016f3c20)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x40004b7810?, 0x0?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e6930?, 0x4000082310?}, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e6930, 0x4000082310}, 0x4001564f38, {0x369e4a0, 0x40017514d0}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x36f4250?, {0x369e4a0?, 0x40017514d0?}, 0xc0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x40015a0750, 0x3b9aca00, 0x0, 0x1, 0x4000082310)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 973
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 3630 [chan receive, 32 minutes]:
testing.(*testState).waitParallel(0x40006ac3c0)
	/usr/local/go/src/testing/testing.go:2116 +0x158
testing.(*T).Parallel(0x4001475500)
	/usr/local/go/src/testing/testing.go:1709 +0x19c
k8s.io/minikube/test/integration.MaybeParallel(0x4001475500)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:501 +0x5c
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0x4001475500)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x2c0
testing.tRunner(0x4001475500, 0x400015b680)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 3590
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 5087 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 5086
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 972 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff5e0, {{0x36f4250, 0x40001bc080?}, 0x4001623340?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 971
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 4334 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e6930, 0x4000082310}, 0x40012dff40, 0x40012dff88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e6930, 0x4000082310}, 0x10?, 0x40012dff40, 0x40012dff88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e6930?, 0x4000082310?}, 0x0?, 0x95c64?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x40015d9dc0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4330
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 3510 [chan receive, 7 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1891 +0x3d0
testing.tRunner(0x4001474000, 0x339bcf8)
	/usr/local/go/src/testing/testing.go:1940 +0x104
created by testing.(*T).Run in goroutine 3334
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 3800 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff5e0, {{0x36f4250, 0x40001bc080?}, 0x4001a7e000?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3823
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 5086 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e6930, 0x4000082310}, 0x400009ef40, 0x400009ef88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e6930, 0x4000082310}, 0x2?, 0x400009ef40, 0x400009ef88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e6930?, 0x4000082310?}, 0x0?, 0x400009ef50?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x36f4250?, 0x40001bc080?, 0x4000447080?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 5082
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c


Test pass (238/316)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 7.65
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.09
9 TestDownloadOnly/v1.28.0/DeleteAll 0.22
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.3/json-events 5.49
13 TestDownloadOnly/v1.34.3/preload-exists 0
17 TestDownloadOnly/v1.34.3/LogsDuration 0.07
18 TestDownloadOnly/v1.34.3/DeleteAll 0.21
19 TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds 0.14
21 TestDownloadOnly/v1.35.0-rc.1/json-events 5.73
22 TestDownloadOnly/v1.35.0-rc.1/preload-exists 0
26 TestDownloadOnly/v1.35.0-rc.1/LogsDuration 0.07
27 TestDownloadOnly/v1.35.0-rc.1/DeleteAll 0.21
28 TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds 0.13
30 TestBinaryMirror 0.6
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
36 TestAddons/Setup 129.51
40 TestAddons/serial/GCPAuth/Namespaces 0.19
41 TestAddons/serial/GCPAuth/FakeCredentials 9.86
57 TestAddons/StoppedEnableDisable 12.55
58 TestCertOptions 35.41
59 TestCertExpiration 241.03
61 TestForceSystemdFlag 39.1
62 TestForceSystemdEnv 43.85
67 TestErrorSpam/setup 31.98
68 TestErrorSpam/start 0.86
69 TestErrorSpam/status 1.25
70 TestErrorSpam/pause 6.31
71 TestErrorSpam/unpause 5.47
72 TestErrorSpam/stop 1.52
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 52.27
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 28.05
79 TestFunctional/serial/KubeContext 0.06
80 TestFunctional/serial/KubectlGetPods 0.08
83 TestFunctional/serial/CacheCmd/cache/add_remote 3.46
84 TestFunctional/serial/CacheCmd/cache/add_local 1.32
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
86 TestFunctional/serial/CacheCmd/cache/list 0.06
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.95
89 TestFunctional/serial/CacheCmd/cache/delete 0.13
90 TestFunctional/serial/MinikubeKubectlCmd 0.14
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
92 TestFunctional/serial/ExtraConfig 30.99
93 TestFunctional/serial/ComponentHealth 0.1
94 TestFunctional/serial/LogsCmd 1.47
95 TestFunctional/serial/LogsFileCmd 1.5
96 TestFunctional/serial/InvalidService 4.67
98 TestFunctional/parallel/ConfigCmd 0.47
99 TestFunctional/parallel/DashboardCmd 11.53
100 TestFunctional/parallel/DryRun 0.56
101 TestFunctional/parallel/InternationalLanguage 0.21
102 TestFunctional/parallel/StatusCmd 1.31
106 TestFunctional/parallel/ServiceCmdConnect 7.61
107 TestFunctional/parallel/AddonsCmd 0.15
108 TestFunctional/parallel/PersistentVolumeClaim 20.86
110 TestFunctional/parallel/SSHCmd 0.78
111 TestFunctional/parallel/CpCmd 1.99
113 TestFunctional/parallel/FileSync 0.34
114 TestFunctional/parallel/CertSync 2.17
118 TestFunctional/parallel/NodeLabels 0.12
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.73
122 TestFunctional/parallel/License 0.33
123 TestFunctional/parallel/Version/short 0.09
124 TestFunctional/parallel/Version/components 0.8
125 TestFunctional/parallel/ImageCommands/ImageListShort 0.33
126 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
127 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
128 TestFunctional/parallel/ImageCommands/ImageListYaml 0.27
129 TestFunctional/parallel/ImageCommands/ImageBuild 4.06
130 TestFunctional/parallel/ImageCommands/Setup 0.69
131 TestFunctional/parallel/UpdateContextCmd/no_changes 0.22
132 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.21
133 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.2
134 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.55
135 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1
136 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.47
138 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.59
139 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
141 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.44
142 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.47
143 TestFunctional/parallel/ImageCommands/ImageRemove 0.57
144 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.77
145 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.47
146 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.11
147 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
151 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
152 TestFunctional/parallel/ServiceCmd/DeployApp 7.21
153 TestFunctional/parallel/ProfileCmd/profile_not_create 0.43
154 TestFunctional/parallel/ProfileCmd/profile_list 0.44
155 TestFunctional/parallel/ProfileCmd/profile_json_output 0.45
156 TestFunctional/parallel/ServiceCmd/List 0.7
157 TestFunctional/parallel/MountCmd/any-port 7.95
158 TestFunctional/parallel/ServiceCmd/JSONOutput 0.55
159 TestFunctional/parallel/ServiceCmd/HTTPS 0.58
160 TestFunctional/parallel/ServiceCmd/Format 0.5
161 TestFunctional/parallel/ServiceCmd/URL 0.48
162 TestFunctional/parallel/MountCmd/specific-port 2.09
163 TestFunctional/parallel/MountCmd/VerifyCleanup 2.53
164 TestFunctional/delete_echo-server_images 0.04
165 TestFunctional/delete_my-image_image 0.02
166 TestFunctional/delete_minikube_cached_images 0.02
170 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile 0
172 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog 0
174 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext 0.06
178 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote 3.59
179 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local 1.09
180 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete 0.06
181 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list 0.06
182 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node 0.33
183 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload 1.8
184 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete 0.12
189 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd 0.93
190 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd 1.04
193 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd 0.43
195 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun 0.44
196 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage 0.25
202 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd 0.14
205 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd 0.74
206 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd 2.19
208 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync 0.28
209 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync 1.69
215 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled 0.54
217 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License 0.19
220 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel 0
227 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel 0.1
234 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create 0.39
235 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list 0.41
236 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output 0.39
238 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port 2.15
239 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup 2.18
240 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short 0.06
241 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components 0.5
242 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort 0.22
243 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable 0.22
244 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson 0.23
245 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml 0.23
246 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild 3.82
247 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup 0.26
248 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon 1.2
249 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon 0.82
250 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon 1.05
251 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile 0.36
252 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove 0.53
253 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile 0.76
254 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon 0.43
255 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes 0.15
256 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster 0.18
257 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters 0.14
258 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images 0.04
259 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image 0.02
260 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images 0.02
264 TestMultiControlPlane/serial/StartCluster 148.88
265 TestMultiControlPlane/serial/DeployApp 7.16
266 TestMultiControlPlane/serial/PingHostFromPods 1.51
267 TestMultiControlPlane/serial/AddWorkerNode 30.56
268 TestMultiControlPlane/serial/NodeLabels 0.11
269 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.08
270 TestMultiControlPlane/serial/CopyFile 19.89
271 TestMultiControlPlane/serial/StopSecondaryNode 12.84
272 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.82
273 TestMultiControlPlane/serial/RestartSecondaryNode 33.2
274 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.4
276 TestMultiControlPlane/serial/DeleteSecondaryNode 12.01
278 TestMultiControlPlane/serial/StopCluster 36.07
279 TestMultiControlPlane/serial/RestartCluster 91.98
280 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.78
281 TestMultiControlPlane/serial/AddSecondaryNode 80.28
282 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.03
287 TestJSONOutput/start/Command 52.62
288 TestJSONOutput/start/Audit 0
290 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
291 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
294 TestJSONOutput/pause/Audit 0
296 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
297 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
300 TestJSONOutput/unpause/Audit 0
302 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
303 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
305 TestJSONOutput/stop/Command 5.87
306 TestJSONOutput/stop/Audit 0
308 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
309 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
310 TestErrorJSONOutput 0.25
312 TestKicCustomNetwork/create_custom_network 40.01
313 TestKicCustomNetwork/use_default_bridge_network 35.15
314 TestKicExistingNetwork 34.66
315 TestKicCustomSubnet 35.59
316 TestKicStaticIP 34.89
317 TestMainNoArgs 0.06
318 TestMinikubeProfile 78
321 TestMountStart/serial/StartWithMountFirst 8.83
322 TestMountStart/serial/VerifyMountFirst 0.28
323 TestMountStart/serial/StartWithMountSecond 9.28
324 TestMountStart/serial/VerifyMountSecond 0.27
325 TestMountStart/serial/DeleteFirst 1.74
326 TestMountStart/serial/VerifyMountPostDelete 0.3
327 TestMountStart/serial/Stop 1.33
328 TestMountStart/serial/RestartStopped 8.16
329 TestMountStart/serial/VerifyMountPostStop 0.27
332 TestMultiNode/serial/FreshStart2Nodes 79.44
333 TestMultiNode/serial/DeployApp2Nodes 4.74
334 TestMultiNode/serial/PingHostFrom2Pods 0.93
335 TestMultiNode/serial/AddNode 31.61
336 TestMultiNode/serial/MultiNodeLabels 0.1
337 TestMultiNode/serial/ProfileList 0.71
338 TestMultiNode/serial/CopyFile 10.27
339 TestMultiNode/serial/StopNode 2.38
340 TestMultiNode/serial/StartAfterStop 8.46
341 TestMultiNode/serial/RestartKeepsNodes 73.2
342 TestMultiNode/serial/DeleteNode 5.63
343 TestMultiNode/serial/StopMultiNode 24.03
344 TestMultiNode/serial/RestartMultiNode 48.5
345 TestMultiNode/serial/ValidateNameConflict 36.83
350 TestPreload 125.34
352 TestScheduledStopUnix 108.05
355 TestInsufficientStorage 10.01
356 TestRunningBinaryUpgrade 300.33
359 TestMissingContainerUpgrade 118.87
361 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
362 TestNoKubernetes/serial/StartWithK8s 48.21
363 TestNoKubernetes/serial/StartWithStopK8s 111.57
364 TestNoKubernetes/serial/Start 10.18
365 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
366 TestNoKubernetes/serial/VerifyK8sNotRunning 0.34
367 TestNoKubernetes/serial/ProfileList 1.01
368 TestNoKubernetes/serial/Stop 1.3
369 TestNoKubernetes/serial/StartNoArgs 7.16
370 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.27
371 TestStoppedBinaryUpgrade/Setup 1.66
372 TestStoppedBinaryUpgrade/Upgrade 299.18
373 TestStoppedBinaryUpgrade/MinikubeLogs 1.8
382 TestPause/serial/Start 52.8
383 TestPause/serial/SecondStartNoReconfiguration 26.92
x
+
TestDownloadOnly/v1.28.0/json-events (7.65s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-306451 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-306451 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (7.653388347s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (7.65s)
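The start command above is an ordinary subprocess invocation that the harness runs and times. A minimal Go sketch of reproducing it outside the test framework follows; only the binary path and flags come from the Run line above, while the structure and error handling are illustrative assumptions, not harness code.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Mirror the (dbg) Run line: drive the minikube binary in
	// download-only mode and report how long the downloads took.
	start := time.Now()
	cmd := exec.Command("out/minikube-linux-arm64",
		"start", "-o=json", "--download-only", "-p", "download-only-306451",
		"--force", "--alsologtostderr",
		"--kubernetes-version=v1.28.0",
		"--container-runtime=crio", "--driver=docker")
	out, err := cmd.CombinedOutput()
	fmt.Printf("finished in %s, err=%v\n%s", time.Since(start), err, out)
}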

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1217 20:11:32.063204  488412 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1217 20:11:32.063281  488412 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21808-485134/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
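The preload-exists check boils down to a stat of a deterministically named tarball under the minikube cache. A hedged sketch, assuming only the path layout the log lines above print (the v18 schema segment and the file-name pattern are read off the log, not taken from minikube's source):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// preloadPath rebuilds the cache path printed by preload.go above:
// <minikube home>/cache/preloaded-tarball/preloaded-images-k8s-v18-<version>-cri-o-overlay-arm64.tar.lz4
func preloadPath(home, k8sVersion string) string {
	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-cri-o-overlay-arm64.tar.lz4", k8sVersion)
	return filepath.Join(home, "cache", "preloaded-tarball", name)
}

func main() {
	p := preloadPath("/home/jenkins/minikube-integration/21808-485134/.minikube", "v1.28.0")
	if _, err := os.Stat(p); err != nil {
		fmt.Println("no local preload:", err)
		return
	}
	fmt.Println("found local preload:", p)
}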

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-306451
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-306451: exit status 85 (89.994978ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-306451 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-306451 │ jenkins │ v1.37.0 │ 17 Dec 25 20:11 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 20:11:24
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 20:11:24.457932  488418 out.go:360] Setting OutFile to fd 1 ...
	I1217 20:11:24.458054  488418 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:11:24.458063  488418 out.go:374] Setting ErrFile to fd 2...
	I1217 20:11:24.458069  488418 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:11:24.458320  488418 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-485134/.minikube/bin
	W1217 20:11:24.458454  488418 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21808-485134/.minikube/config/config.json: open /home/jenkins/minikube-integration/21808-485134/.minikube/config/config.json: no such file or directory
	I1217 20:11:24.458865  488418 out.go:368] Setting JSON to true
	I1217 20:11:24.459741  488418 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":10434,"bootTime":1765991851,"procs":150,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1217 20:11:24.459813  488418 start.go:143] virtualization:  
	I1217 20:11:24.465843  488418 out.go:99] [download-only-306451] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1217 20:11:24.466082  488418 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/21808-485134/.minikube/cache/preloaded-tarball: no such file or directory
	I1217 20:11:24.466211  488418 notify.go:221] Checking for updates...
	I1217 20:11:24.469883  488418 out.go:171] MINIKUBE_LOCATION=21808
	I1217 20:11:24.473512  488418 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 20:11:24.476806  488418 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21808-485134/kubeconfig
	I1217 20:11:24.480024  488418 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-485134/.minikube
	I1217 20:11:24.483075  488418 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1217 20:11:24.489471  488418 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1217 20:11:24.489816  488418 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 20:11:24.513931  488418 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 20:11:24.514062  488418 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 20:11:24.582600  488418 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-17 20:11:24.56757741 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 20:11:24.582710  488418 docker.go:319] overlay module found
	I1217 20:11:24.585856  488418 out.go:99] Using the docker driver based on user configuration
	I1217 20:11:24.585948  488418 start.go:309] selected driver: docker
	I1217 20:11:24.585971  488418 start.go:927] validating driver "docker" against <nil>
	I1217 20:11:24.586081  488418 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 20:11:24.645188  488418 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-17 20:11:24.636306379 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 20:11:24.645333  488418 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 20:11:24.645643  488418 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1217 20:11:24.645822  488418 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1217 20:11:24.649169  488418 out.go:171] Using Docker driver with root privileges
	I1217 20:11:24.652242  488418 cni.go:84] Creating CNI manager for ""
	I1217 20:11:24.652310  488418 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 20:11:24.652323  488418 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1217 20:11:24.652403  488418 start.go:353] cluster config:
	{Name:download-only-306451 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-306451 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:11:24.655394  488418 out.go:99] Starting "download-only-306451" primary control-plane node in "download-only-306451" cluster
	I1217 20:11:24.655421  488418 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 20:11:24.658360  488418 out.go:99] Pulling base image v0.0.48-1765661130-22141 ...
	I1217 20:11:24.658395  488418 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1217 20:11:24.658636  488418 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 20:11:24.678110  488418 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 20:11:24.678131  488418 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 to local cache
	I1217 20:11:24.678279  488418 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local cache directory
	I1217 20:11:24.678378  488418 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 to local cache
	I1217 20:11:24.727268  488418 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1217 20:11:24.727296  488418 cache.go:65] Caching tarball of preloaded images
	I1217 20:11:24.727458  488418 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1217 20:11:24.730867  488418 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1217 20:11:24.730892  488418 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1217 20:11:24.816516  488418 preload.go:295] Got checksum from GCS API "e092595ade89dbfc477bd4cd6b9c633b"
	I1217 20:11:24.816647  488418 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e092595ade89dbfc477bd4cd6b9c633b -> /home/jenkins/minikube-integration/21808-485134/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-306451 host does not exist
	  To start a cluster, run: "minikube start -p download-only-306451"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.09s)
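Note the inversion here: `minikube logs` exits non-zero (status 85) because the download-only profile never created a host, and the test passes because that failure is the expected outcome. A small sketch of asserting that specific exit code, treating 85 as stable for this situation only because this report shows it, not because it is a documented contract:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "logs", "-p", "download-only-306451")
	err := cmd.Run()
	var ee *exec.ExitError
	// exec.ExitError carries the process exit status; 85 is what the
	// report above shows when the control-plane host does not exist.
	if errors.As(err, &ee) && ee.ExitCode() == 85 {
		fmt.Println("expected failure: exit status 85, host does not exist")
		return
	}
	fmt.Println("unexpected result:", err)
}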

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-306451
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.3/json-events (5.49s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-705173 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-705173 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.493193121s)
--- PASS: TestDownloadOnly/v1.34.3/json-events (5.49s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/preload-exists
I1217 20:11:38.014270  488412 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
I1217 20:11:38.014312  488412 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21808-485134/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.3/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.3/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-705173
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-705173: exit status 85 (73.919686ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-306451 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-306451 │ jenkins │ v1.37.0 │ 17 Dec 25 20:11 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 17 Dec 25 20:11 UTC │ 17 Dec 25 20:11 UTC │
	│ delete  │ -p download-only-306451                                                                                                                                                   │ download-only-306451 │ jenkins │ v1.37.0 │ 17 Dec 25 20:11 UTC │ 17 Dec 25 20:11 UTC │
	│ start   │ -o=json --download-only -p download-only-705173 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-705173 │ jenkins │ v1.37.0 │ 17 Dec 25 20:11 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 20:11:32
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 20:11:32.563105  488618 out.go:360] Setting OutFile to fd 1 ...
	I1217 20:11:32.563244  488618 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:11:32.563255  488618 out.go:374] Setting ErrFile to fd 2...
	I1217 20:11:32.563261  488618 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:11:32.563674  488618 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-485134/.minikube/bin
	I1217 20:11:32.564715  488618 out.go:368] Setting JSON to true
	I1217 20:11:32.565670  488618 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":10442,"bootTime":1765991851,"procs":145,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1217 20:11:32.565782  488618 start.go:143] virtualization:  
	I1217 20:11:32.569332  488618 out.go:99] [download-only-705173] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 20:11:32.569592  488618 notify.go:221] Checking for updates...
	I1217 20:11:32.572677  488618 out.go:171] MINIKUBE_LOCATION=21808
	I1217 20:11:32.575792  488618 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 20:11:32.578779  488618 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21808-485134/kubeconfig
	I1217 20:11:32.581921  488618 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-485134/.minikube
	I1217 20:11:32.584909  488618 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1217 20:11:32.590610  488618 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1217 20:11:32.590991  488618 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 20:11:32.621259  488618 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 20:11:32.621393  488618 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 20:11:32.681267  488618 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:48 SystemTime:2025-12-17 20:11:32.671892071 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 20:11:32.681372  488618 docker.go:319] overlay module found
	I1217 20:11:32.685489  488618 out.go:99] Using the docker driver based on user configuration
	I1217 20:11:32.685542  488618 start.go:309] selected driver: docker
	I1217 20:11:32.685551  488618 start.go:927] validating driver "docker" against <nil>
	I1217 20:11:32.685684  488618 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 20:11:32.741713  488618 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:48 SystemTime:2025-12-17 20:11:32.732264834 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 20:11:32.741870  488618 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 20:11:32.742136  488618 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1217 20:11:32.742308  488618 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1217 20:11:32.745579  488618 out.go:171] Using Docker driver with root privileges
	I1217 20:11:32.748403  488618 cni.go:84] Creating CNI manager for ""
	I1217 20:11:32.748486  488618 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 20:11:32.748499  488618 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1217 20:11:32.748580  488618 start.go:353] cluster config:
	{Name:download-only-705173 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:download-only-705173 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:11:32.751683  488618 out.go:99] Starting "download-only-705173" primary control-plane node in "download-only-705173" cluster
	I1217 20:11:32.751711  488618 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 20:11:32.754583  488618 out.go:99] Pulling base image v0.0.48-1765661130-22141 ...
	I1217 20:11:32.754642  488618 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 20:11:32.754744  488618 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 20:11:32.774486  488618 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 20:11:32.774509  488618 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 to local cache
	I1217 20:11:32.774622  488618 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local cache directory
	I1217 20:11:32.774645  488618 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local cache directory, skipping pull
	I1217 20:11:32.774650  488618 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in cache, skipping pull
	I1217 20:11:32.774662  488618 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 as a tarball
	I1217 20:11:32.817365  488618 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.3/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4
	I1217 20:11:32.817413  488618 cache.go:65] Caching tarball of preloaded images
	I1217 20:11:32.817589  488618 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 20:11:32.820706  488618 out.go:99] Downloading Kubernetes v1.34.3 preload ...
	I1217 20:11:32.820747  488618 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1217 20:11:32.906774  488618 preload.go:295] Got checksum from GCS API "c7c3cca4fcbe5ef642ca3e3e5575910e"
	I1217 20:11:32.906834  488618 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.3/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4?checksum=md5:c7c3cca4fcbe5ef642ca3e3e5575910e -> /home/jenkins/minikube-integration/21808-485134/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4
	I1217 20:11:37.420648  488618 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1217 20:11:37.421126  488618 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/download-only-705173/config.json ...
	I1217 20:11:37.421168  488618 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/download-only-705173/config.json: {Name:mk108ecbd2903206b439e040a6474b5b2dbdf69a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:11:37.421371  488618 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 20:11:37.421549  488618 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/21808-485134/.minikube/cache/linux/arm64/v1.34.3/kubectl
	
	
	* The control-plane node download-only-705173 host does not exist
	  To start a cluster, run: "minikube start -p download-only-705173"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.3/LogsDuration (0.07s)
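Both preload downloads above first fetch an MD5 from the GCS API and then pass it along as a ?checksum=md5:<hash> query parameter, so the tarball is verified as it lands on disk; the kubectl download does the same with a checksum=file:...sha256 reference. A stdlib-only sketch of that verify-while-downloading idea, with the URL and hash copied from the v1.34.3 log lines (this is not minikube's downloader, just the shape of the check):

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

// downloadWithMD5 streams url to dest while hashing the bytes, then
// rejects the file if the digest differs from wantMD5.
func downloadWithMD5(url, wantMD5, dest string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	f, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer f.Close()
	h := md5.New()
	// Tee the body through the hash while writing it to disk.
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
		return fmt.Errorf("checksum mismatch: got %s want %s", got, wantMD5)
	}
	return nil
}

func main() {
	err := downloadWithMD5(
		"https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.3/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4",
		"c7c3cca4fcbe5ef642ca3e3e5575910e", // checksum from the GCS API, per the log
		"preloaded-images.tar.lz4")
	fmt.Println("download result:", err)
}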

                                                
                                    
x
+
TestDownloadOnly/v1.34.3/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.3/DeleteAll (0.21s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-705173
--- PASS: TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds (0.14s)
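One more pattern worth calling out from the LogsDuration dumps above: before pulling the kicbase image, minikube checks the local docker daemon ("Found ... in local docker daemon, skipping pull") and then a tarball in its cache directory ("exists in cache, skipping pull"). A schematic sketch of that decision ladder follows; the function and the daemon probe are placeholders of ours, not minikube's API.

package main

import (
	"fmt"
	"os"
)

// baseImageSource walks the same ladder the logs show: daemon first,
// then the on-disk cache, and only then a network pull.
func baseImageSource(cachedTarball string, inDaemon func() bool) string {
	if inDaemon() {
		return "daemon (skipping pull)"
	}
	if _, err := os.Stat(cachedTarball); err == nil {
		return "local cache (skipping pull)"
	}
	return "network pull required"
}

func main() {
	// Placeholder path; the real cache lives under the profile's .minikube tree.
	fmt.Println(baseImageSource("/path/to/.minikube/cache/kic/kicbase.tar",
		func() bool { return false }))
}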

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-rc.1/json-events (5.73s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-496345 --force --alsologtostderr --kubernetes-version=v1.35.0-rc.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-496345 --force --alsologtostderr --kubernetes-version=v1.35.0-rc.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.727564635s)
--- PASS: TestDownloadOnly/v1.35.0-rc.1/json-events (5.73s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-rc.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/preload-exists
I1217 20:11:44.167458  488412 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
I1217 20:11:44.167503  488412 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21808-485134/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0-rc.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-rc.1/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-496345
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-496345: exit status 85 (64.525713ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                      ARGS                                                                                      │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-306451 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio      │ download-only-306451 │ jenkins │ v1.37.0 │ 17 Dec 25 20:11 UTC │                     │
	│ delete  │ --all                                                                                                                                                                          │ minikube             │ jenkins │ v1.37.0 │ 17 Dec 25 20:11 UTC │ 17 Dec 25 20:11 UTC │
	│ delete  │ -p download-only-306451                                                                                                                                                        │ download-only-306451 │ jenkins │ v1.37.0 │ 17 Dec 25 20:11 UTC │ 17 Dec 25 20:11 UTC │
	│ start   │ -o=json --download-only -p download-only-705173 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=crio --driver=docker  --container-runtime=crio      │ download-only-705173 │ jenkins │ v1.37.0 │ 17 Dec 25 20:11 UTC │                     │
	│ delete  │ --all                                                                                                                                                                          │ minikube             │ jenkins │ v1.37.0 │ 17 Dec 25 20:11 UTC │ 17 Dec 25 20:11 UTC │
	│ delete  │ -p download-only-705173                                                                                                                                                        │ download-only-705173 │ jenkins │ v1.37.0 │ 17 Dec 25 20:11 UTC │ 17 Dec 25 20:11 UTC │
	│ start   │ -o=json --download-only -p download-only-496345 --force --alsologtostderr --kubernetes-version=v1.35.0-rc.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-496345 │ jenkins │ v1.37.0 │ 17 Dec 25 20:11 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 20:11:38
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 20:11:38.483860  488824 out.go:360] Setting OutFile to fd 1 ...
	I1217 20:11:38.484364  488824 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:11:38.484384  488824 out.go:374] Setting ErrFile to fd 2...
	I1217 20:11:38.484390  488824 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:11:38.484720  488824 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-485134/.minikube/bin
	I1217 20:11:38.485235  488824 out.go:368] Setting JSON to true
	I1217 20:11:38.486109  488824 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":10448,"bootTime":1765991851,"procs":145,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1217 20:11:38.486182  488824 start.go:143] virtualization:  
	I1217 20:11:38.487828  488824 out.go:99] [download-only-496345] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 20:11:38.488105  488824 notify.go:221] Checking for updates...
	I1217 20:11:38.489419  488824 out.go:171] MINIKUBE_LOCATION=21808
	I1217 20:11:38.490659  488824 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 20:11:38.491818  488824 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21808-485134/kubeconfig
	I1217 20:11:38.492934  488824 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-485134/.minikube
	I1217 20:11:38.494124  488824 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1217 20:11:38.496321  488824 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1217 20:11:38.496664  488824 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 20:11:38.519301  488824 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 20:11:38.519431  488824 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 20:11:38.583234  488824 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-17 20:11:38.569280209 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 20:11:38.583343  488824 docker.go:319] overlay module found
	I1217 20:11:38.584690  488824 out.go:99] Using the docker driver based on user configuration
	I1217 20:11:38.584715  488824 start.go:309] selected driver: docker
	I1217 20:11:38.584721  488824 start.go:927] validating driver "docker" against <nil>
	I1217 20:11:38.584829  488824 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 20:11:38.644659  488824 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-17 20:11:38.635168462 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 20:11:38.644821  488824 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 20:11:38.645086  488824 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1217 20:11:38.645253  488824 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1217 20:11:38.646790  488824 out.go:171] Using Docker driver with root privileges
	I1217 20:11:38.648229  488824 cni.go:84] Creating CNI manager for ""
	I1217 20:11:38.648307  488824 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 20:11:38.648323  488824 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1217 20:11:38.648420  488824 start.go:353] cluster config:
	{Name:download-only-496345 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:download-only-496345 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:11:38.649636  488824 out.go:99] Starting "download-only-496345" primary control-plane node in "download-only-496345" cluster
	I1217 20:11:38.649656  488824 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 20:11:38.650797  488824 out.go:99] Pulling base image v0.0.48-1765661130-22141 ...
	I1217 20:11:38.650834  488824 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1217 20:11:38.650869  488824 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 20:11:38.670409  488824 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 20:11:38.670436  488824 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 to local cache
	I1217 20:11:38.670545  488824 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local cache directory
	I1217 20:11:38.670569  488824 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local cache directory, skipping pull
	I1217 20:11:38.670574  488824 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in cache, skipping pull
	I1217 20:11:38.670582  488824 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 as a tarball
	I1217 20:11:38.750304  488824 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-rc.1/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4
	I1217 20:11:38.750338  488824 cache.go:65] Caching tarball of preloaded images
	I1217 20:11:38.750539  488824 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1217 20:11:38.752162  488824 out.go:99] Downloading Kubernetes v1.35.0-rc.1 preload ...
	I1217 20:11:38.752200  488824 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1217 20:11:38.839764  488824 preload.go:295] Got checksum from GCS API "efae947990a69f0349b1b3fdbfa98de4"
	I1217 20:11:38.839820  488824 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-rc.1/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4?checksum=md5:efae947990a69f0349b1b3fdbfa98de4 -> /home/jenkins/minikube-integration/21808-485134/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-496345 host does not exist
	  To start a cluster, run: "minikube start -p download-only-496345"
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-rc.1/LogsDuration (0.07s)
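
Note: the preload tarball above is downloaded with an MD5 checksum fetched from the GCS API. A rough manual equivalent of that download-and-verify step, for reproducing it outside the test harness (sketch only; URL and checksum taken from the log):

    $ curl -fLO https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-rc.1/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4
    $ echo "efae947990a69f0349b1b3fdbfa98de4  preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4" | md5sum -c -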

TestDownloadOnly/v1.35.0-rc.1/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.35.0-rc.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-rc.1/DeleteAll (0.21s)

TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-496345
--- PASS: TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.6s)

=== RUN   TestBinaryMirror
I1217 20:11:45.528899  488412 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-177191 --alsologtostderr --binary-mirror http://127.0.0.1:41401 --driver=docker  --container-runtime=crio
helpers_test.go:176: Cleaning up "binary-mirror-177191" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-177191
--- PASS: TestBinaryMirror (0.60s)
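
Note: the checksum=file:... query above means the expected digest is fetched from a second URL rather than being inlined. Verified by hand, the same download would look roughly like the standard kubectl install check (sketch):

    $ curl -fLO https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubectl
    $ echo "$(curl -fsL https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubectl.sha256)  kubectl" | sha256sum -c -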

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-052340
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-052340: exit status 85 (63.978913ms)
-- stdout --
	* Profile "addons-052340" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-052340"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-052340
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-052340: exit status 85 (73.586273ms)
-- stdout --
	* Profile "addons-052340" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-052340"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (129.51s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-052340 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-052340 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m9.512998367s)
--- PASS: TestAddons/Setup (129.51s)

TestAddons/serial/GCPAuth/Namespaces (0.19s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-052340 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-052340 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.19s)

TestAddons/serial/GCPAuth/FakeCredentials (9.86s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-052340 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-052340 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [ab7efbf1-1687-403a-8468-b76d1e68a61a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [ab7efbf1-1687-403a-8468-b76d1e68a61a] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.004230208s
addons_test.go:696: (dbg) Run:  kubectl --context addons-052340 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-052340 describe sa gcp-auth-test
addons_test.go:722: (dbg) Run:  kubectl --context addons-052340 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:746: (dbg) Run:  kubectl --context addons-052340 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.86s)

TestAddons/StoppedEnableDisable (12.55s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-052340
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-052340: (12.259032498s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-052340
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-052340
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-052340
--- PASS: TestAddons/StoppedEnableDisable (12.55s)

TestCertOptions (35.41s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-904874 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-904874 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (32.529963223s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-904874 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-904874 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-904874 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-904874" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-904874
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-904874: (2.099606487s)
--- PASS: TestCertOptions (35.41s)
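
Note: this test passes --apiserver-ips/--apiserver-names/--apiserver-port and then inspects the serving certificate. Checking the extra SANs by hand would look roughly like (sketch; same cert path and ssh form the test uses):

    $ out/minikube-linux-arm64 -p cert-options-904874 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A2 "Subject Alternative Name"

expecting 192.168.15.15 and www.google.com to appear among the SANs.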

TestCertExpiration (241.03s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-919031 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
E1217 21:38:56.661807  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-919031 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (37.073265516s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-919031 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-919031 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (21.375661533s)
helpers_test.go:176: Cleaning up "cert-expiration-919031" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-919031
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-919031: (2.576521974s)
--- PASS: TestCertExpiration (241.03s)
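
Note: 8760h is 365 days, so the second start re-issues certificates with a one-year lifetime after the initial 3m certs. Inspecting the resulting expiry by hand would be roughly (sketch):

    $ out/minikube-linux-arm64 ssh -p cert-expiration-919031 -- "sudo openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt"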

TestForceSystemdFlag (39.1s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-529066 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1217 21:38:21.928379  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-529066 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (35.997110689s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-529066 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:176: Cleaning up "force-systemd-flag-529066" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-529066
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-529066: (2.717221521s)
--- PASS: TestForceSystemdFlag (39.10s)
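
Note: the `cat /etc/crio/crio.conf.d/02-crio.conf` step checks that --force-systemd switched CRI-O to the systemd cgroup driver. The relevant drop-in setting looks roughly like (illustrative excerpt, not the exact file from this run):

    [crio.runtime]
    cgroup_manager = "systemd"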

TestForceSystemdEnv (43.85s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-754211 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-754211 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (41.012721422s)
helpers_test.go:176: Cleaning up "force-systemd-env-754211" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-754211
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-754211: (2.839139967s)
--- PASS: TestForceSystemdEnv (43.85s)

TestErrorSpam/setup (31.98s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-906613 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-906613 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-906613 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-906613 --driver=docker  --container-runtime=crio: (31.982098768s)
--- PASS: TestErrorSpam/setup (31.98s)

TestErrorSpam/start (0.86s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-906613 --log_dir /tmp/nospam-906613 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-906613 --log_dir /tmp/nospam-906613 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-906613 --log_dir /tmp/nospam-906613 start --dry-run
--- PASS: TestErrorSpam/start (0.86s)

TestErrorSpam/status (1.25s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-906613 --log_dir /tmp/nospam-906613 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-906613 --log_dir /tmp/nospam-906613 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-906613 --log_dir /tmp/nospam-906613 status
--- PASS: TestErrorSpam/status (1.25s)

TestErrorSpam/pause (6.31s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-906613 --log_dir /tmp/nospam-906613 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-906613 --log_dir /tmp/nospam-906613 pause: exit status 80 (2.173169001s)
-- stdout --
	* Pausing node nospam-906613 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T20:18:02Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-906613 --log_dir /tmp/nospam-906613 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-906613 --log_dir /tmp/nospam-906613 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-906613 --log_dir /tmp/nospam-906613 pause: exit status 80 (1.82028013s)
-- stdout --
	* Pausing node nospam-906613 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T20:18:04Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-906613 --log_dir /tmp/nospam-906613 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-906613 --log_dir /tmp/nospam-906613 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-906613 --log_dir /tmp/nospam-906613 pause: exit status 80 (2.312865818s)
-- stdout --
	* Pausing node nospam-906613 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T20:18:06Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-906613 --log_dir /tmp/nospam-906613 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (6.31s)
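
Note: all three pause attempts fail identically: `sudo runc list -f json` exits 1 because /run/runc, runc's default state directory, is absent inside the node container, suggesting no containers were registered there at that moment. The underlying commands can be re-run by hand (sketch; same ssh form used elsewhere in this suite):

    $ out/minikube-linux-arm64 ssh -p nospam-906613 -- "sudo runc list -f json"
    $ out/minikube-linux-arm64 ssh -p nospam-906613 -- "ls -ld /run/runc"

The test still records PASS because TestErrorSpam evidently asserts output hygiene across repeated invocations, not that pause itself succeeds.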

TestErrorSpam/unpause (5.47s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-906613 --log_dir /tmp/nospam-906613 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-906613 --log_dir /tmp/nospam-906613 unpause: exit status 80 (1.657115151s)
-- stdout --
	* Unpausing node nospam-906613 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T20:18:08Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-906613 --log_dir /tmp/nospam-906613 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-906613 --log_dir /tmp/nospam-906613 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-906613 --log_dir /tmp/nospam-906613 unpause: exit status 80 (1.956843016s)
-- stdout --
	* Unpausing node nospam-906613 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T20:18:10Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-906613 --log_dir /tmp/nospam-906613 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-906613 --log_dir /tmp/nospam-906613 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-906613 --log_dir /tmp/nospam-906613 unpause: exit status 80 (1.858825239s)
-- stdout --
	* Unpausing node nospam-906613 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T20:18:11Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-906613 --log_dir /tmp/nospam-906613 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.47s)

TestErrorSpam/stop (1.52s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-906613 --log_dir /tmp/nospam-906613 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-906613 --log_dir /tmp/nospam-906613 stop: (1.313298882s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-906613 --log_dir /tmp/nospam-906613 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-906613 --log_dir /tmp/nospam-906613 stop
--- PASS: TestErrorSpam/stop (1.52s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/test/nested/copy/488412/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (52.27s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-643319 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1217 20:18:56.660903  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:18:56.667394  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:18:56.678862  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:18:56.700279  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:18:56.741649  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:18:56.823288  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:18:56.984812  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:18:57.306508  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:18:57.948506  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:18:59.230120  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:19:01.793063  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:19:06.914539  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-643319 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (52.267886101s)
--- PASS: TestFunctional/serial/StartWithProxy (52.27s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (28.05s)

=== RUN   TestFunctional/serial/SoftStart
I1217 20:19:10.400008  488412 config.go:182] Loaded profile config "functional-643319": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-643319 --alsologtostderr -v=8
E1217 20:19:17.156813  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:19:37.638917  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-643319 --alsologtostderr -v=8: (28.048729091s)
functional_test.go:678: soft start took 28.04928953s for "functional-643319" cluster.
I1217 20:19:38.449058  488412 config.go:182] Loaded profile config "functional-643319": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestFunctional/serial/SoftStart (28.05s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-643319 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.46s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-643319 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-643319 cache add registry.k8s.io/pause:3.1: (1.22030811s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-643319 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-643319 cache add registry.k8s.io/pause:3.3: (1.124313101s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-643319 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-643319 cache add registry.k8s.io/pause:latest: (1.117356297s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.46s)

TestFunctional/serial/CacheCmd/cache/add_local (1.32s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-643319 /tmp/TestFunctionalserialCacheCmdcacheadd_local3712753894/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-643319 cache add minikube-local-cache-test:functional-643319
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-643319 cache delete minikube-local-cache-test:functional-643319
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-643319
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.32s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-643319 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.95s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-643319 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-643319 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-643319 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (282.591751ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-643319 cache reload
functional_test.go:1173: (dbg) Done: out/minikube-linux-arm64 -p functional-643319 cache reload: (1.006757053s)
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-643319 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.95s)

TestFunctional/serial/CacheCmd/cache/delete (0.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-643319 kubectl -- --context functional-643319 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-643319 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

TestFunctional/serial/ExtraConfig (30.99s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-643319 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-643319 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (30.989618394s)
functional_test.go:776: restart took 30.989712073s for "functional-643319" cluster.
I1217 20:20:17.130982  488412 config.go:182] Loaded profile config "functional-643319": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestFunctional/serial/ExtraConfig (30.99s)
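
Note: --extra-config takes the form component.key=value; here enable-admission-plugins=NamespaceAutoProvision is handed straight to kube-apiserver. The same pattern applies to the other kubeadm-managed components, e.g. (illustrative value, not from this run):

    $ out/minikube-linux-arm64 start -p functional-643319 --extra-config=kubelet.max-pods=120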

TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-643319 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

TestFunctional/serial/LogsCmd (1.47s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-643319 logs
E1217 20:20:18.600635  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-643319 logs: (1.466407535s)
--- PASS: TestFunctional/serial/LogsCmd (1.47s)

TestFunctional/serial/LogsFileCmd (1.5s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-643319 logs --file /tmp/TestFunctionalserialLogsFileCmd1321728075/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-643319 logs --file /tmp/TestFunctionalserialLogsFileCmd1321728075/001/logs.txt: (1.497129455s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.50s)

TestFunctional/serial/InvalidService (4.67s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-643319 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-643319
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-643319: exit status 115 (377.822607ms)
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30517 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-643319 delete -f testdata/invalidsvc.yaml
functional_test.go:2332: (dbg) Done: kubectl --context functional-643319 delete -f testdata/invalidsvc.yaml: (1.031549293s)
--- PASS: TestFunctional/serial/InvalidService (4.67s)
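
Note: SVC_UNREACHABLE here means the Service object exists (the NodePort table above is printed) but has no running backing pod. A quick manual confirmation (sketch):

    $ kubectl --context functional-643319 get endpoints invalid-svc

An empty ENDPOINTS column matches the "no running pod for service invalid-svc found" diagnosis.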

TestFunctional/parallel/ConfigCmd (0.47s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-643319 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-643319 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-643319 config get cpus: exit status 14 (79.92184ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-643319 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-643319 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-643319 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-643319 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-643319 config get cpus: exit status 14 (57.253704ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.47s)

TestFunctional/parallel/DashboardCmd (11.53s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-643319 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-643319 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 514712: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (11.53s)

TestFunctional/parallel/DryRun (0.56s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-643319 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-643319 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (226.882595ms)

-- stdout --
	* [functional-643319] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21808
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21808-485134/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-485134/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1217 20:21:00.847695  514435 out.go:360] Setting OutFile to fd 1 ...
	I1217 20:21:00.848033  514435 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:21:00.848068  514435 out.go:374] Setting ErrFile to fd 2...
	I1217 20:21:00.848094  514435 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:21:00.848411  514435 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-485134/.minikube/bin
	I1217 20:21:00.848846  514435 out.go:368] Setting JSON to false
	I1217 20:21:00.849832  514435 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":11010,"bootTime":1765991851,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1217 20:21:00.849946  514435 start.go:143] virtualization:  
	I1217 20:21:00.853204  514435 out.go:179] * [functional-643319] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 20:21:00.856211  514435 out.go:179]   - MINIKUBE_LOCATION=21808
	I1217 20:21:00.856370  514435 notify.go:221] Checking for updates...
	I1217 20:21:00.862512  514435 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 20:21:00.865456  514435 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-485134/kubeconfig
	I1217 20:21:00.868939  514435 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-485134/.minikube
	I1217 20:21:00.872432  514435 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 20:21:00.877881  514435 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 20:21:00.880471  514435 config.go:182] Loaded profile config "functional-643319": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:21:00.881050  514435 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 20:21:00.913549  514435 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 20:21:00.913739  514435 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 20:21:00.991799  514435 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-17 20:21:00.981491266 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 20:21:00.991911  514435 docker.go:319] overlay module found
	I1217 20:21:00.996717  514435 out.go:179] * Using the docker driver based on existing profile
	I1217 20:21:01.001074  514435 start.go:309] selected driver: docker
	I1217 20:21:01.001104  514435 start.go:927] validating driver "docker" against &{Name:functional-643319 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:functional-643319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:21:01.001242  514435 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 20:21:01.006163  514435 out.go:203] 
	W1217 20:21:01.009906  514435 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1217 20:21:01.013342  514435 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-643319 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.56s)
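
A minimal sketch of the validation exercised here, under the same assumptions; --dry-run only validates the requested configuration, and as the log above shows, a request below the usable minimum fails with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY):

	minikube start -p functional-643319 --dry-run --memory 250MB --driver=docker --container-runtime=crio
	test $? -eq 23 && echo "sub-minimum memory request rejected as expected"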

TestFunctional/parallel/InternationalLanguage (0.21s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-643319 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-643319 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (204.459103ms)

-- stdout --
	* [functional-643319] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21808
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21808-485134/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-485134/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1217 20:21:00.641772  514387 out.go:360] Setting OutFile to fd 1 ...
	I1217 20:21:00.641987  514387 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:21:00.642022  514387 out.go:374] Setting ErrFile to fd 2...
	I1217 20:21:00.642047  514387 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:21:00.642461  514387 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-485134/.minikube/bin
	I1217 20:21:00.642896  514387 out.go:368] Setting JSON to false
	I1217 20:21:00.643862  514387 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":11010,"bootTime":1765991851,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1217 20:21:00.643973  514387 start.go:143] virtualization:  
	I1217 20:21:00.647725  514387 out.go:179] * [functional-643319] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1217 20:21:00.651484  514387 out.go:179]   - MINIKUBE_LOCATION=21808
	I1217 20:21:00.651565  514387 notify.go:221] Checking for updates...
	I1217 20:21:00.657892  514387 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 20:21:00.660700  514387 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-485134/kubeconfig
	I1217 20:21:00.663663  514387 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-485134/.minikube
	I1217 20:21:00.666577  514387 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 20:21:00.669572  514387 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 20:21:00.672993  514387 config.go:182] Loaded profile config "functional-643319": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:21:00.673561  514387 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 20:21:00.702607  514387 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 20:21:00.702736  514387 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 20:21:00.769465  514387 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-17 20:21:00.759108426 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 20:21:00.769575  514387 docker.go:319] overlay module found
	I1217 20:21:00.772634  514387 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1217 20:21:00.775549  514387 start.go:309] selected driver: docker
	I1217 20:21:00.775573  514387 start.go:927] validating driver "docker" against &{Name:functional-643319 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:functional-643319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:21:00.775794  514387 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 20:21:00.779205  514387 out.go:203] 
	W1217 20:21:00.782274  514387 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1217 20:21:00.785134  514387 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.21s)

TestFunctional/parallel/StatusCmd (1.31s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-643319 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-643319 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-643319 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.31s)
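
The three invocations cover the default, templated, and JSON output modes; a minimal sketch (the labels in the format string are arbitrary text chosen by the caller, which is why the test's "kublet" spelling is harmless):

	minikube -p functional-643319 status
	minikube -p functional-643319 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
	minikube -p functional-643319 status -o json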

TestFunctional/parallel/ServiceCmdConnect (7.61s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-643319 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-643319 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-7d85dfc575-2mbfv" [d450d7fc-55d1-4cf3-b471-7b6b1ce1c793] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-7d85dfc575-2mbfv" [d450d7fc-55d1-4cf3-b471-7b6b1ce1c793] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.003991219s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-arm64 -p functional-643319 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:30459
functional_test.go:1680: http://192.168.49.2:30459: success! body:
Request served by hello-node-connect-7d85dfc575-2mbfv

HTTP/1.1 GET /

Host: 192.168.49.2:30459
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.61s)
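
A minimal sketch of the round trip, assuming kubectl and curl are available; the URL printed by service --url is the NodePort endpoint probed above:

	kubectl --context functional-643319 create deployment hello-node-connect --image kicbase/echo-server
	kubectl --context functional-643319 expose deployment hello-node-connect --type=NodePort --port=8080
	URL=$(minikube -p functional-643319 service hello-node-connect --url)
	curl -s "$URL"    # echo-server replies with the request it served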

TestFunctional/parallel/AddonsCmd (0.15s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-643319 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-643319 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

TestFunctional/parallel/PersistentVolumeClaim (20.86s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [c9939a18-c3a5-4ff3-bf0c-2249a672cb0e] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.00279491s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-643319 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-643319 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-643319 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-643319 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [88e21d62-13f8-4cf5-a385-5d7f0b6095cc] Pending
helpers_test.go:353: "sp-pod" [88e21d62-13f8-4cf5-a385-5d7f0b6095cc] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.004332677s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-643319 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-643319 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-643319 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [183f14dc-0854-46f3-8717-ae11ac793b6f] Pending
helpers_test.go:353: "sp-pod" [183f14dc-0854-46f3-8717-ae11ac793b6f] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004319249s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-643319 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (20.86s)
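
The persistence check reduces to writing through the claim, deleting the pod, and reading the file back from a replacement pod; a minimal sketch using the same manifests:

	kubectl --context functional-643319 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-643319 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-643319 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-643319 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-643319 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-643319 exec sp-pod -- ls /tmp/mount    # foo should survive the pod restart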

TestFunctional/parallel/SSHCmd (0.78s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-643319 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-643319 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.78s)

TestFunctional/parallel/CpCmd (1.99s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-643319 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-643319 ssh -n functional-643319 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-643319 cp functional-643319:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd665075458/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-643319 ssh -n functional-643319 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-643319 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-643319 ssh -n functional-643319 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.99s)
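
A minimal sketch of the copy in both directions; the closing diff is an illustrative check, not part of the test:

	minikube -p functional-643319 cp testdata/cp-test.txt /home/docker/cp-test.txt
	minikube -p functional-643319 ssh -n functional-643319 "sudo cat /home/docker/cp-test.txt"
	minikube -p functional-643319 cp functional-643319:/home/docker/cp-test.txt /tmp/cp-test.txt
	diff testdata/cp-test.txt /tmp/cp-test.txt    # illustrative: the round-tripped file should match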

TestFunctional/parallel/FileSync (0.34s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/488412/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-643319 ssh "sudo cat /etc/test/nested/copy/488412/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.34s)

TestFunctional/parallel/CertSync (2.17s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/488412.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-643319 ssh "sudo cat /etc/ssl/certs/488412.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/488412.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-643319 ssh "sudo cat /usr/share/ca-certificates/488412.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-643319 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/4884122.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-643319 ssh "sudo cat /etc/ssl/certs/4884122.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/4884122.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-643319 ssh "sudo cat /usr/share/ca-certificates/4884122.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-643319 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.17s)
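
The paired paths follow the usual OpenSSL trust-store layout: the .pem files are the synced certificates, and names like 51391683.0 are subject-hash links. A sketch of verifying the correspondence, on the assumption that the .0 file checked here belongs to the synced certificate:

	minikube -p functional-643319 ssh "openssl x509 -noout -subject_hash -in /etc/ssl/certs/488412.pem"    # assumed to print 51391683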

TestFunctional/parallel/NodeLabels (0.12s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-643319 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.12s)
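
The go-template iterates the label map of the first node and prints only the keys; a minimal sketch of the same query:

	kubectl --context functional-643319 get nodes --output=go-template \
	  --template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'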

TestFunctional/parallel/NonActiveRuntimeDisabled (0.73s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-643319 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-643319 ssh "sudo systemctl is-active docker": exit status 1 (372.013075ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-643319 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-643319 ssh "sudo systemctl is-active containerd": exit status 1 (353.312677ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.73s)
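
The exit status 3 in both stderr blocks is systemctl's non-zero return for an inactive unit, propagated through ssh; a minimal sketch, with the crio check added as an illustrative counterpart since crio is the configured runtime here:

	minikube -p functional-643319 ssh "sudo systemctl is-active docker"        # prints inactive, exits non-zero
	minikube -p functional-643319 ssh "sudo systemctl is-active containerd"    # prints inactive, exits non-zero
	minikube -p functional-643319 ssh "sudo systemctl is-active crio"          # illustrative: expected to print active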

TestFunctional/parallel/License (0.33s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.33s)

TestFunctional/parallel/Version/short (0.09s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-643319 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

TestFunctional/parallel/Version/components (0.8s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-643319 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.80s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-643319 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-643319 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.3
registry.k8s.io/kube-proxy:v1.34.3
registry.k8s.io/kube-controller-manager:v1.34.3
registry.k8s.io/kube-apiserver:v1.34.3
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
public.ecr.aws/nginx/nginx:alpine
localhost/minikube-local-cache-test:functional-643319
localhost/kicbase/echo-server:functional-643319
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-643319 image ls --format short --alsologtostderr:
I1217 20:21:10.758663  516116 out.go:360] Setting OutFile to fd 1 ...
I1217 20:21:10.758768  516116 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 20:21:10.758779  516116 out.go:374] Setting ErrFile to fd 2...
I1217 20:21:10.758784  516116 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 20:21:10.759038  516116 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-485134/.minikube/bin
I1217 20:21:10.759672  516116 config.go:182] Loaded profile config "functional-643319": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1217 20:21:10.759797  516116 config.go:182] Loaded profile config "functional-643319": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1217 20:21:10.760315  516116 cli_runner.go:164] Run: docker container inspect functional-643319 --format={{.State.Status}}
I1217 20:21:10.779026  516116 ssh_runner.go:195] Run: systemctl --version
I1217 20:21:10.779148  516116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-643319
I1217 20:21:10.801170  516116 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/functional-643319/id_rsa Username:docker}
I1217 20:21:10.902746  516116 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.33s)
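
The four ImageList subtests (this one and the three that follow) differ only in the serializer selected by --format; a minimal sketch of all four:

	minikube -p functional-643319 image ls --format short
	minikube -p functional-643319 image ls --format table
	minikube -p functional-643319 image ls --format json
	minikube -p functional-643319 image ls --format yaml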

TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-643319 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-643319 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬───────────────────────────────────────┬───────────────┬────────┐
│                  IMAGE                  │                  TAG                  │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼───────────────────────────────────────┼───────────────┼────────┤
│ registry.k8s.io/etcd                    │ 3.6.5-0                               │ 2c5f0dedd21c2 │ 60.9MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.3                               │ cf65ae6c8f700 │ 84.8MB │
│ registry.k8s.io/pause                   │ 3.1                                   │ 8057e0500773a │ 529kB  │
│ registry.k8s.io/pause                   │ 3.3                                   │ 3d18732f8686c │ 487kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b                    │ b1a8c6f707935 │ 111MB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                                    │ ba04bb24b9575 │ 29MB   │
│ localhost/minikube-local-cache-test     │ functional-643319                     │ a4b0dba10d4a1 │ 3.33kB │
│ public.ecr.aws/nginx/nginx              │ alpine                                │ 10afed3caf3ee │ 55.1MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1                               │ 138784d87c9c5 │ 73.2MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.3                               │ 7ada8ff13e54b │ 72.6MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.3                               │ 2f2aa21d34d2d │ 51.6MB │
│ docker.io/kicbase/echo-server           │ latest                                │ ce2d2cda2d858 │ 4.79MB │
│ localhost/kicbase/echo-server           │ functional-643319                     │ ce2d2cda2d858 │ 4.79MB │
│ docker.io/kindest/kindnetd              │ v20251212-v0.29.0-alpha-105-g20ccfc88 │ c96ee3c174987 │ 108MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc                          │ 1611cd07b61d5 │ 3.77MB │
│ registry.k8s.io/pause                   │ 3.10.1                                │ d7b100cd9a77b │ 520kB  │
│ registry.k8s.io/pause                   │ latest                                │ 8cb2091f603e7 │ 246kB  │
│ registry.k8s.io/kube-proxy              │ v1.34.3                               │ 4461daf6b6af8 │ 75.9MB │
└─────────────────────────────────────────┴───────────────────────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-643319 image ls --format table --alsologtostderr:
I1217 20:21:13.168805  516367 out.go:360] Setting OutFile to fd 1 ...
I1217 20:21:13.168986  516367 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 20:21:13.169017  516367 out.go:374] Setting ErrFile to fd 2...
I1217 20:21:13.169042  516367 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 20:21:13.169305  516367 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-485134/.minikube/bin
I1217 20:21:13.169973  516367 config.go:182] Loaded profile config "functional-643319": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1217 20:21:13.170139  516367 config.go:182] Loaded profile config "functional-643319": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1217 20:21:13.170697  516367 cli_runner.go:164] Run: docker container inspect functional-643319 --format={{.State.Status}}
I1217 20:21:13.196214  516367 ssh_runner.go:195] Run: systemctl --version
I1217 20:21:13.196270  516367 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-643319
I1217 20:21:13.216182  516367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/functional-643319/id_rsa Username:docker}
I1217 20:21:13.314290  516367 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-643319 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-643319 image ls --format json --alsologtostderr:
[{"id":"c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13","repoDigests":["docker.io/kindest/kindnetd@sha256:377e2e7a513148f7c942b51cd57bdce1589940df856105384ac7f753a1ab43ae","docker.io/kindest/kindnetd@sha256:f1260f5691195cc9a693dc0b55178aa724d944efd62486a8320f0583272b1fa3"],"repoTags":["docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88"],"size":"108362109"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896","repoDigests":["registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460","registry.k8s.io/kube-apiserver@sha256:6fa1e54cee33473ab964d87ea870ccf4ac9e6e4012b6d73160fcc3a99c7be9b5"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.3"],"size":"84818927"},{"id":"7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:49437795b4edd6ed8ada141b20cf576fb0aa4e84b82d6a25af841ed293abece1","registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.3"],"size":"72629077"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"519884"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b","docker.io/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b","localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-643319"],"size":"4788229"},{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"10afed3caf3eed1b711b8fa0a9600a7b488a45653a15a598a47ac570c1204cc4","repoDigests":["public.ecr.aws/nginx/nginx@sha256:2faa7e87b6fbce823070978247970cea2ad90b1936e84eeae1bd2680b03c168d","public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"55077248"},{"id":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"73195387"},{"id":"4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162","repoDigests":["registry.k8s.io/kube-proxy@sha256:5c52b97ed657a0a1ef3c24e25d953fcca37fa200f3ec98938c254d748008dd86","registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.3"],"size":"75941783"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"60857170"},{"id":"2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6","repoDigests":["registry.k8s.io/kube-scheduler@sha256:7f3d992e0f2cb23d075ddafc8c73b5bdcf0ebc01098ef92965cc371eabcb9611","registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.3"],"size":"51592021"},{"id":"a4b0dba10d4a1d351a74a10b8dbd7196f37bb5aac64f0d7c69e0effc55b1fd0a","repoDigests":["localhost/minikube-local-cache-test@sha256:26d297b0745547aa1b45ef43e4465581dc07f25322dfaafe7d81e738e53b7936"],"repoTags":["localhost/minikube-local-cache-test:functional-643319"],"size":"3330"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-643319 image ls --format json --alsologtostderr:
I1217 20:21:12.937140  516330 out.go:360] Setting OutFile to fd 1 ...
I1217 20:21:12.937340  516330 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 20:21:12.937367  516330 out.go:374] Setting ErrFile to fd 2...
I1217 20:21:12.937387  516330 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 20:21:12.937687  516330 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-485134/.minikube/bin
I1217 20:21:12.938435  516330 config.go:182] Loaded profile config "functional-643319": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1217 20:21:12.938612  516330 config.go:182] Loaded profile config "functional-643319": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1217 20:21:12.939192  516330 cli_runner.go:164] Run: docker container inspect functional-643319 --format={{.State.Status}}
I1217 20:21:12.962980  516330 ssh_runner.go:195] Run: systemctl --version
I1217 20:21:12.963033  516330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-643319
I1217 20:21:12.981181  516330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/functional-643319/id_rsa Username:docker}
I1217 20:21:13.078714  516330 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-643319 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-643319 image ls --format yaml --alsologtostderr:
- id: c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13
repoDigests:
- docker.io/kindest/kindnetd@sha256:377e2e7a513148f7c942b51cd57bdce1589940df856105384ac7f753a1ab43ae
- docker.io/kindest/kindnetd@sha256:f1260f5691195cc9a693dc0b55178aa724d944efd62486a8320f0583272b1fa3
repoTags:
- docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
size: "108362109"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: 10afed3caf3eed1b711b8fa0a9600a7b488a45653a15a598a47ac570c1204cc4
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:2faa7e87b6fbce823070978247970cea2ad90b1936e84eeae1bd2680b03c168d
- public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "55077248"
- id: 138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "73195387"
- id: 2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "60857170"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b
- docker.io/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b
- localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-643319
size: "4788229"
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460
- registry.k8s.io/kube-apiserver@sha256:6fa1e54cee33473ab964d87ea870ccf4ac9e6e4012b6d73160fcc3a99c7be9b5
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.3
size: "84818927"
- id: 7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:49437795b4edd6ed8ada141b20cf576fb0aa4e84b82d6a25af841ed293abece1
- registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.3
size: "72629077"
- id: 4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162
repoDigests:
- registry.k8s.io/kube-proxy@sha256:5c52b97ed657a0a1ef3c24e25d953fcca37fa200f3ec98938c254d748008dd86
- registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6
repoTags:
- registry.k8s.io/kube-proxy:v1.34.3
size: "75941783"
- id: 2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:7f3d992e0f2cb23d075ddafc8c73b5bdcf0ebc01098ef92965cc371eabcb9611
- registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.3
size: "51592021"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: a4b0dba10d4a1d351a74a10b8dbd7196f37bb5aac64f0d7c69e0effc55b1fd0a
repoDigests:
- localhost/minikube-local-cache-test@sha256:26d297b0745547aa1b45ef43e4465581dc07f25322dfaafe7d81e738e53b7936
repoTags:
- localhost/minikube-local-cache-test:functional-643319
size: "3330"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-643319 image ls --format yaml --alsologtostderr:
I1217 20:21:11.064432  516165 out.go:360] Setting OutFile to fd 1 ...
I1217 20:21:11.064584  516165 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 20:21:11.064592  516165 out.go:374] Setting ErrFile to fd 2...
I1217 20:21:11.064597  516165 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 20:21:11.064877  516165 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-485134/.minikube/bin
I1217 20:21:11.065485  516165 config.go:182] Loaded profile config "functional-643319": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1217 20:21:11.065589  516165 config.go:182] Loaded profile config "functional-643319": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1217 20:21:11.066111  516165 cli_runner.go:164] Run: docker container inspect functional-643319 --format={{.State.Status}}
I1217 20:21:11.104109  516165 ssh_runner.go:195] Run: systemctl --version
I1217 20:21:11.104160  516165 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-643319
I1217 20:21:11.128262  516165 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/functional-643319/id_rsa Username:docker}
I1217 20:21:11.230194  516165 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)
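Note: per the stderr trace above, the YAML listing is produced by shelling into the node and asking the container runtime for its image list. A rough way to reproduce the underlying call by hand (a sketch, using the profile name from this run):

# sketch: query CRI-O directly for the image list that "image ls" formats
out/minikube-linux-arm64 -p functional-643319 ssh -- sudo crictl images --output json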
TestFunctional/parallel/ImageCommands/ImageBuild (4.06s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-643319 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-643319 ssh pgrep buildkitd: exit status 1 (362.768264ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-643319 image build -t localhost/my-image:functional-643319 testdata/build --alsologtostderr
2025/12/17 20:21:12 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-643319 image build -t localhost/my-image:functional-643319 testdata/build --alsologtostderr: (3.463120454s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-643319 image build -t localhost/my-image:functional-643319 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 2ae9ae55572
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-643319
--> ea84d6b92a8
Successfully tagged localhost/my-image:functional-643319
ea84d6b92a84a13e7a50a23f6673bf527c8fb7f3326eeb0a328f698b5107c924
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-643319 image build -t localhost/my-image:functional-643319 testdata/build --alsologtostderr:
I1217 20:21:11.705562  516274 out.go:360] Setting OutFile to fd 1 ...
I1217 20:21:11.711679  516274 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 20:21:11.711720  516274 out.go:374] Setting ErrFile to fd 2...
I1217 20:21:11.711728  516274 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 20:21:11.712080  516274 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-485134/.minikube/bin
I1217 20:21:11.712839  516274 config.go:182] Loaded profile config "functional-643319": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1217 20:21:11.713552  516274 config.go:182] Loaded profile config "functional-643319": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1217 20:21:11.714110  516274 cli_runner.go:164] Run: docker container inspect functional-643319 --format={{.State.Status}}
I1217 20:21:11.732200  516274 ssh_runner.go:195] Run: systemctl --version
I1217 20:21:11.732259  516274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-643319
I1217 20:21:11.749744  516274 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/functional-643319/id_rsa Username:docker}
I1217 20:21:11.846442  516274 build_images.go:162] Building image from path: /tmp/build.2772651637.tar
I1217 20:21:11.846511  516274 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1217 20:21:11.854530  516274 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2772651637.tar
I1217 20:21:11.858423  516274 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2772651637.tar: stat -c "%s %y" /var/lib/minikube/build/build.2772651637.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2772651637.tar': No such file or directory
I1217 20:21:11.858453  516274 ssh_runner.go:362] scp /tmp/build.2772651637.tar --> /var/lib/minikube/build/build.2772651637.tar (3072 bytes)
I1217 20:21:11.877892  516274 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2772651637
I1217 20:21:11.885618  516274 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2772651637 -xf /var/lib/minikube/build/build.2772651637.tar
I1217 20:21:11.893785  516274 crio.go:315] Building image: /var/lib/minikube/build/build.2772651637
I1217 20:21:11.893880  516274 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-643319 /var/lib/minikube/build/build.2772651637 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1217 20:21:15.068790  516274 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-643319 /var/lib/minikube/build/build.2772651637 --cgroup-manager=cgroupfs: (3.174880189s)
I1217 20:21:15.068860  516274 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2772651637
I1217 20:21:15.077704  516274 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2772651637.tar
I1217 20:21:15.086225  516274 build_images.go:218] Built localhost/my-image:functional-643319 from /tmp/build.2772651637.tar
I1217 20:21:15.086260  516274 build_images.go:134] succeeded building to: functional-643319
I1217 20:21:15.086266  516274 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-643319 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.06s)
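Note: the STEP 1/3..3/3 output above implies a build context roughly like the sketch below. The real contents of content.txt are not shown in the log, so the placeholder file here is an assumption:

# sketch of a testdata/build-style context matching the build steps above
mkdir -p testdata/build
printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > testdata/build/Dockerfile
echo placeholder > testdata/build/content.txt   # assumed; the real file's contents are unknown
out/minikube-linux-arm64 -p functional-643319 image build -t localhost/my-image:functional-643319 testdata/build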
TestFunctional/parallel/ImageCommands/Setup (0.69s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-643319
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.69s)
TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-643319 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-643319 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.2s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-643319 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.20s)
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.55s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-643319 image load --daemon kicbase/echo-server:functional-643319 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-643319 image load --daemon kicbase/echo-server:functional-643319 --alsologtostderr: (1.282776828s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-643319 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.55s)
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-643319 image load --daemon kicbase/echo-server:functional-643319 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-643319 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.00s)
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.47s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-643319
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-643319 image load --daemon kicbase/echo-server:functional-643319 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-643319 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.47s)
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.59s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-643319 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-643319 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-643319 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-643319 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 512002: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.59s)
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-643319 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.44s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-643319 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [e1b57bf6-4a35-4b0e-9aab-d7410504ccf8] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [e1b57bf6-4a35-4b0e-9aab-d7410504ccf8] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.004015252s
I1217 20:20:41.288920  488412 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.44s)
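Note: testdata/testsvc.yaml itself is not reproduced in this log. Judging from the run=nginx-svc label waited on above and the LoadBalancer ingress IP used by the tunnel subtests below, it is roughly a pod/service pair like this sketch; everything beyond the names and the run=nginx-svc label (image, port, service type) is an assumption:

# sketch of a testsvc.yaml-shaped manifest; field values beyond names/labels are assumed
kubectl --context functional-643319 apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nginx-svc
  labels:
    run: nginx-svc
spec:
  containers:
  - name: nginx
    image: public.ecr.aws/nginx/nginx:alpine
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  type: LoadBalancer
  selector:
    run: nginx-svc
  ports:
  - port: 80
EOF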
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.47s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-643319 image save kicbase/echo-server:functional-643319 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.47s)
TestFunctional/parallel/ImageCommands/ImageRemove (0.57s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-643319 image rm kicbase/echo-server:functional-643319 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-643319 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.57s)
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.77s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-643319 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:424: (dbg) Done: out/minikube-linux-arm64 -p functional-643319 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr: (1.190565957s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-643319 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.77s)
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.47s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-643319
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-643319 image save --daemon kicbase/echo-server:functional-643319 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-643319
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.47s)
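Note: taken together, the ImageCommands save/remove/load subtests above exercise a full round-trip. Condensed from the commands in this run:

# image save/load round-trip (paths and profile name from this run)
out/minikube-linux-arm64 -p functional-643319 image save kicbase/echo-server:functional-643319 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar
out/minikube-linux-arm64 -p functional-643319 image rm kicbase/echo-server:functional-643319
out/minikube-linux-arm64 -p functional-643319 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar
out/minikube-linux-arm64 -p functional-643319 image save --daemon kicbase/echo-server:functional-643319
docker image inspect localhost/kicbase/echo-server:functional-643319   # image is back in the host daemon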
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-643319 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.111.59.156 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-643319 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
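Note: the tunnel subtests above amount to the following workflow; a condensed sketch using values from this run (the ingress IP will differ on another run):

# minikube tunnel workflow, condensed from the subtests above
out/minikube-linux-arm64 -p functional-643319 tunnel --alsologtostderr &
kubectl --context functional-643319 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'   # -> 10.111.59.156
curl http://10.111.59.156   # LoadBalancer service reachable from the host while the tunnel runs
kill %1                     # stopping the tunnel tears the route down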
TestFunctional/parallel/ServiceCmd/DeployApp (7.21s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-643319 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-643319 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-75c85bcc94-swvjg" [ae8e81ec-757d-4de0-8b8a-cb49397cd979] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-75c85bcc94-swvjg" [ae8e81ec-757d-4de0-8b8a-cb49397cd979] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.004701018s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.21s)
TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)
TestFunctional/parallel/ProfileCmd/profile_list (0.44s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "375.316531ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "64.643868ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.44s)
TestFunctional/parallel/ProfileCmd/profile_json_output (0.45s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "372.553515ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "73.027126ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.45s)
TestFunctional/parallel/ServiceCmd/List (0.7s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-643319 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.70s)
TestFunctional/parallel/MountCmd/any-port (7.95s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-643319 /tmp/TestFunctionalparallelMountCmdany-port1796123597/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1766002856571698066" to /tmp/TestFunctionalparallelMountCmdany-port1796123597/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1766002856571698066" to /tmp/TestFunctionalparallelMountCmdany-port1796123597/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1766002856571698066" to /tmp/TestFunctionalparallelMountCmdany-port1796123597/001/test-1766002856571698066
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-643319 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-643319 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (476.581206ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1217 20:20:57.048537  488412 retry.go:31] will retry after 736.271821ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-643319 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-643319 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 17 20:20 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 17 20:20 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 17 20:20 test-1766002856571698066
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-643319 ssh cat /mount-9p/test-1766002856571698066
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-643319 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [65dd6c64-cde7-4b1f-b805-0ebca655adde] Pending
helpers_test.go:353: "busybox-mount" [65dd6c64-cde7-4b1f-b805-0ebca655adde] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [65dd6c64-cde7-4b1f-b805-0ebca655adde] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [65dd6c64-cde7-4b1f-b805-0ebca655adde] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.003615825s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-643319 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-643319 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-643319 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-643319 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-643319 /tmp/TestFunctionalparallelMountCmdany-port1796123597/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.95s)
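Note: condensing the any-port subtest above, the 9p mount check is essentially (host path from this run):

# 9p mount smoke test, condensed from the log above
out/minikube-linux-arm64 mount -p functional-643319 /tmp/TestFunctionalparallelMountCmdany-port1796123597/001:/mount-9p &
out/minikube-linux-arm64 -p functional-643319 ssh "findmnt -T /mount-9p | grep 9p"   # retried until the mount appears
out/minikube-linux-arm64 -p functional-643319 ssh -- ls -la /mount-9p               # host-side files visible in the guest
out/minikube-linux-arm64 -p functional-643319 ssh "sudo umount -f /mount-9p"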
TestFunctional/parallel/ServiceCmd/JSONOutput (0.55s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-643319 service list -o json
functional_test.go:1504: Took "550.079738ms" to run "out/minikube-linux-arm64 -p functional-643319 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.55s)
TestFunctional/parallel/ServiceCmd/HTTPS (0.58s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-643319 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:30485
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.58s)
TestFunctional/parallel/ServiceCmd/Format (0.5s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-643319 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.50s)
TestFunctional/parallel/ServiceCmd/URL (0.48s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-643319 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30485
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.48s)
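Note: the ServiceCmd lookups above resolve the same hello-node NodePort endpoint in different formats; from this run:

# hello-node endpoint lookups, as exercised above
out/minikube-linux-arm64 -p functional-643319 service list -o json
out/minikube-linux-arm64 -p functional-643319 service --namespace=default --https --url hello-node   # https://192.168.49.2:30485
out/minikube-linux-arm64 -p functional-643319 service hello-node --url --format={{.IP}}              # IP only
out/minikube-linux-arm64 -p functional-643319 service hello-node --url                               # http://192.168.49.2:30485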
TestFunctional/parallel/MountCmd/specific-port (2.09s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-643319 /tmp/TestFunctionalparallelMountCmdspecific-port2571748949/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-643319 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-643319 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (472.179438ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1217 20:21:04.989323  488412 retry.go:31] will retry after 325.677351ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-643319 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-643319 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-643319 /tmp/TestFunctionalparallelMountCmdspecific-port2571748949/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-643319 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-643319 ssh "sudo umount -f /mount-9p": exit status 1 (318.9496ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-643319 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-643319 /tmp/TestFunctionalparallelMountCmdspecific-port2571748949/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.09s)
TestFunctional/parallel/MountCmd/VerifyCleanup (2.53s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-643319 /tmp/TestFunctionalparallelMountCmdVerifyCleanup781964025/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-643319 /tmp/TestFunctionalparallelMountCmdVerifyCleanup781964025/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-643319 /tmp/TestFunctionalparallelMountCmdVerifyCleanup781964025/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-643319 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-643319 ssh "findmnt -T" /mount1: exit status 1 (939.138743ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1217 20:21:07.559322  488412 retry.go:31] will retry after 384.976973ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-643319 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-643319 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-643319 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-643319 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-643319 /tmp/TestFunctionalparallelMountCmdVerifyCleanup781964025/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-643319 /tmp/TestFunctionalparallelMountCmdVerifyCleanup781964025/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-643319 /tmp/TestFunctionalparallelMountCmdVerifyCleanup781964025/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.53s)
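Note: VerifyCleanup starts three concurrent mounts (/mount1, /mount2, /mount3) and then checks that a single kill switch cleans all of them up:

# one command terminates every mount process for the profile
out/minikube-linux-arm64 mount -p functional-643319 --kill=true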
TestFunctional/delete_echo-server_images (0.04s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-643319
--- PASS: TestFunctional/delete_echo-server_images (0.04s)
TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-643319
--- PASS: TestFunctional/delete_my-image_image (0.02s)
TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-643319
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21808-485134/.minikube/files/etc/test/nested/copy/488412/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile (0.00s)
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog (0.00s)
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext (0.06s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext (0.06s)
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote (3.59s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-655452 cache add registry.k8s.io/pause:3.1: (1.216964686s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-655452 cache add registry.k8s.io/pause:3.3: (1.201984664s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-655452 cache add registry.k8s.io/pause:latest: (1.17277885s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote (3.59s)
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local (1.09s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-655452 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1serialCacheC2997968127/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 cache add minikube-local-cache-test:functional-655452
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 cache delete minikube-local-cache-test:functional-655452
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-655452
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local (1.09s)
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete (0.06s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete (0.06s)
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list (0.06s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list (0.06s)
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node (0.33s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node (0.33s)
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload (1.8s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-655452 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (295.347162ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload (1.80s)
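Note: the cache_reload subtest verifies that "cache reload" re-pushes cached images after one has been removed from the node; condensed:

# cache reload round-trip, condensed from the log above
out/minikube-linux-arm64 -p functional-655452 ssh sudo crictl rmi registry.k8s.io/pause:latest
out/minikube-linux-arm64 -p functional-655452 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 1: image gone
out/minikube-linux-arm64 -p functional-655452 cache reload
out/minikube-linux-arm64 -p functional-655452 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again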
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete (0.12s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete (0.12s)
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd (0.93s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 logs
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd (0.93s)
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd (1.04s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1serialLogsFi1841607707/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-655452 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1serialLogsFi1841607707/001/logs.txt: (1.039730866s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd (1.04s)
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd (0.43s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-655452 config get cpus: exit status 14 (61.794189ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-655452 config get cpus: exit status 14 (66.647121ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd (0.43s)
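Note: ConfigCmd relies on "config get" exiting with status 14 when a key is unset; condensed:

# config set/get/unset cycle, condensed from the log above (exit 14 = key not found)
out/minikube-linux-arm64 -p functional-655452 config get cpus     # exit 14
out/minikube-linux-arm64 -p functional-655452 config set cpus 2
out/minikube-linux-arm64 -p functional-655452 config get cpus     # prints 2
out/minikube-linux-arm64 -p functional-655452 config unset cpus
out/minikube-linux-arm64 -p functional-655452 config get cpus     # exit 14 again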
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun (0.44s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-655452 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-655452 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: exit status 23 (182.99672ms)
-- stdout --
	* [functional-655452] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21808
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21808-485134/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-485134/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1217 20:50:14.231475  545956 out.go:360] Setting OutFile to fd 1 ...
	I1217 20:50:14.231666  545956 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:50:14.231678  545956 out.go:374] Setting ErrFile to fd 2...
	I1217 20:50:14.231683  545956 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:50:14.231929  545956 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-485134/.minikube/bin
	I1217 20:50:14.232319  545956 out.go:368] Setting JSON to false
	I1217 20:50:14.233199  545956 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":12764,"bootTime":1765991851,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1217 20:50:14.233279  545956 start.go:143] virtualization:  
	I1217 20:50:14.236556  545956 out.go:179] * [functional-655452] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 20:50:14.240270  545956 out.go:179]   - MINIKUBE_LOCATION=21808
	I1217 20:50:14.240428  545956 notify.go:221] Checking for updates...
	I1217 20:50:14.245896  545956 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 20:50:14.248814  545956 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-485134/kubeconfig
	I1217 20:50:14.251624  545956 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-485134/.minikube
	I1217 20:50:14.254458  545956 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 20:50:14.257492  545956 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 20:50:14.260797  545956 config.go:182] Loaded profile config "functional-655452": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 20:50:14.261435  545956 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 20:50:14.290988  545956 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 20:50:14.291127  545956 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 20:50:14.344820  545956 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 20:50:14.33537501 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 20:50:14.344939  545956 docker.go:319] overlay module found
	I1217 20:50:14.348152  545956 out.go:179] * Using the docker driver based on existing profile
	I1217 20:50:14.351073  545956 start.go:309] selected driver: docker
	I1217 20:50:14.351096  545956 start.go:927] validating driver "docker" against &{Name:functional-655452 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-655452 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bin
aryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:50:14.351211  545956 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 20:50:14.354712  545956 out.go:203] 
	W1217 20:50:14.357649  545956 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1217 20:50:14.360441  545956 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-655452 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun (0.44s)
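
Both dry-run starts fail fast because the requested 250MB is below the 1800MB floor quoted in the RSRC_INSUFFICIENT_REQ_MEMORY message, so exit status 23 is the passing outcome. A sketch of that style of pre-flight guard, with illustrative names rather than minikube's actual validation code:

    package main

    import "fmt"

    // minUsableMiB mirrors the 1800MB floor quoted in the error text above.
    const minUsableMiB = 1800

    // validateMemory is an illustrative pre-flight guard, not minikube's code.
    func validateMemory(requestedMiB int) error {
        if requestedMiB < minUsableMiB {
            return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
                requestedMiB, minUsableMiB)
        }
        return nil
    }

    func main() {
        fmt.Println(validateMemory(250)) // mirrors the --memory 250MB dry run
    }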

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage (0.25s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-655452 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-655452 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: exit status 23 (246.795772ms)

-- stdout --
	* [functional-655452] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21808
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21808-485134/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-485134/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1217 20:50:13.991982  545904 out.go:360] Setting OutFile to fd 1 ...
	I1217 20:50:13.992123  545904 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:50:13.992134  545904 out.go:374] Setting ErrFile to fd 2...
	I1217 20:50:13.992139  545904 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:50:13.992521  545904 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-485134/.minikube/bin
	I1217 20:50:13.992939  545904 out.go:368] Setting JSON to false
	I1217 20:50:13.993814  545904 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":12763,"bootTime":1765991851,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1217 20:50:13.993886  545904 start.go:143] virtualization:  
	I1217 20:50:13.997516  545904 out.go:179] * [functional-655452] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1217 20:50:14.001431  545904 out.go:179]   - MINIKUBE_LOCATION=21808
	I1217 20:50:14.004834  545904 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 20:50:14.008170  545904 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-485134/kubeconfig
	I1217 20:50:14.008177  545904 notify.go:221] Checking for updates...
	I1217 20:50:14.011431  545904 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-485134/.minikube
	I1217 20:50:14.014625  545904 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 20:50:14.017549  545904 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 20:50:14.020997  545904 config.go:182] Loaded profile config "functional-655452": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 20:50:14.021632  545904 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 20:50:14.046471  545904 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 20:50:14.046605  545904 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 20:50:14.162179  545904 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 20:50:14.152358293 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 20:50:14.162293  545904 docker.go:319] overlay module found
	I1217 20:50:14.165400  545904 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1217 20:50:14.168146  545904 start.go:309] selected driver: docker
	I1217 20:50:14.168163  545904 start.go:927] validating driver "docker" against &{Name:functional-655452 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-655452 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bin
aryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:50:14.168274  545904 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 20:50:14.171756  545904 out.go:203] 
	W1217 20:50:14.174669  545904 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1217 20:50:14.177495  545904 out.go:203] 

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage (0.25s)
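
The output above is the same dry-run failure rendered in French; locale selection for such output conventionally follows the standard environment variables. A minimal sketch assuming the usual LC_ALL > LC_MESSAGES > LANG precedence (minikube's own lookup may differ):

    package main

    import (
        "fmt"
        "os"
    )

    // preferredLocale assumes the conventional LC_ALL > LC_MESSAGES > LANG
    // precedence; minikube's own lookup may differ in detail.
    func preferredLocale() string {
        for _, v := range []string{"LC_ALL", "LC_MESSAGES", "LANG"} {
            if val := os.Getenv(v); val != "" {
                return val
            }
        }
        return "en_US"
    }

    func main() {
        fmt.Println(preferredLocale())
    }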

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd (0.14s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd (0.14s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd (0.74s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd (0.74s)
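
The SSH checks boil down to shelling a command into the node and comparing trimmed stdout. A self-contained sketch under that assumption (not the test's own helper):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Shell a command into the node and compare trimmed stdout.
        out, err := exec.Command("out/minikube-linux-arm64",
            "-p", "functional-655452", "ssh", "echo hello").Output()
        if err != nil {
            panic(err)
        }
        fmt.Println(strings.TrimSpace(string(out)) == "hello") // expect true
    }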

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd (2.19s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 ssh -n functional-655452 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 cp functional-655452:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelCpCm2932922172/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 ssh -n functional-655452 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 ssh -n functional-655452 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd (2.19s)
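
The cp checks are a round-trip: copy a file into the node, read it back over ssh, and compare bytes. A sketch using the paths from the log:

    package main

    import (
        "bytes"
        "os"
        "os/exec"
    )

    func main() {
        want, err := os.ReadFile("testdata/cp-test.txt")
        if err != nil {
            panic(err)
        }
        // Copy the file into the node, then read it back over ssh.
        if err := exec.Command("out/minikube-linux-arm64", "-p", "functional-655452",
            "cp", "testdata/cp-test.txt", "/home/docker/cp-test.txt").Run(); err != nil {
            panic(err)
        }
        got, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-655452",
            "ssh", "-n", "functional-655452", "sudo cat /home/docker/cp-test.txt").Output()
        if err != nil {
            panic(err)
        }
        if !bytes.Equal(want, got) {
            panic("cp round-trip mismatch")
        }
    }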

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync (0.28s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/488412/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 ssh "sudo cat /etc/test/nested/copy/488412/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync (0.28s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync (1.69s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/488412.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 ssh "sudo cat /etc/ssl/certs/488412.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/488412.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 ssh "sudo cat /usr/share/ca-certificates/488412.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/4884122.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 ssh "sudo cat /etc/ssl/certs/4884122.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/4884122.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 ssh "sudo cat /usr/share/ca-certificates/4884122.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync (1.69s)
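
Each certificate is expected at a PEM path and at a hash-named path; the "51391683.0" and "3ec20f2e.0" entries appear to follow OpenSSL's subject-hash naming. A sketch that probes the three locations checked for the first cert:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // The same PEM should be readable at every expected location, including
        // the hash-named entry (which looks like an OpenSSL subject-hash link).
        for _, p := range []string{
            "/etc/ssl/certs/488412.pem",
            "/usr/share/ca-certificates/488412.pem",
            "/etc/ssl/certs/51391683.0",
        } {
            err := exec.Command("out/minikube-linux-arm64",
                "-p", "functional-655452", "ssh", "sudo cat "+p).Run()
            fmt.Println(p, "readable:", err == nil)
        }
    }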

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled (0.54s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-655452 ssh "sudo systemctl is-active docker": exit status 1 (275.120038ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-655452 ssh "sudo systemctl is-active containerd": exit status 1 (267.778262ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled (0.54s)
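
For a disabled runtime, "systemctl is-active" exits non-zero (status 3) and prints "inactive", so both non-zero exits above are the desired result. A sketch of that check:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // runtimeDisabled treats a non-zero exit plus "inactive" on stdout as the
    // passing outcome for a runtime that should be switched off.
    func runtimeDisabled(unit string) bool {
        out, err := exec.Command("out/minikube-linux-arm64",
            "-p", "functional-655452", "ssh", "sudo systemctl is-active "+unit).Output()
        return err != nil && strings.TrimSpace(string(out)) == "inactive"
    }

    func main() {
        fmt.Println(runtimeDisabled("docker"), runtimeDisabled("containerd")) // expect true true
    }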

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License (0.19s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License (0.19s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-655452 tunnel --alsologtostderr]
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel (0.1s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-655452 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: exit status 103
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel (0.10s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create (0.39s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create (0.39s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list (0.41s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "356.612854ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "57.420322ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list (0.41s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output (0.39s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "325.453664ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "60.903613ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output (0.39s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port (2.15s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-655452 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun191240204/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-655452 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (359.554722ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
I1217 20:50:06.953996  488412 retry.go:31] will retry after 746.526398ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-655452 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun191240204/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-655452 ssh "sudo umount -f /mount-9p": exit status 1 (267.342813ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-655452 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-655452 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun191240204/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port (2.15s)
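
The "will retry after 746.526398ms" line shows the harness polling until the 9p mount appears. A sketch of that retry shape; the backoff values are illustrative, not minikube's retry.go policy:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForMount polls findmnt until the 9p mount shows up or the budget
    // runs out, doubling the sleep between attempts.
    func waitForMount(budget time.Duration) bool {
        backoff := 500 * time.Millisecond
        for end := time.Now().Add(budget); time.Now().Before(end); {
            if exec.Command("out/minikube-linux-arm64", "-p", "functional-655452",
                "ssh", "findmnt -T /mount-9p | grep 9p").Run() == nil {
                return true
            }
            time.Sleep(backoff)
            backoff *= 2
        }
        return false
    }

    func main() {
        fmt.Println(waitForMount(30 * time.Second))
    }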

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup (2.18s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-655452 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun309516247/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-655452 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun309516247/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-655452 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun309516247/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-655452 ssh "findmnt -T" /mount1: exit status 1 (695.500146ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
I1217 20:50:09.444877  488412 retry.go:31] will retry after 627.622509ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-655452 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-655452 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun309516247/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-655452 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun309516247/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-655452 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun309516247/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup (2.18s)
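
Cleanup is verified by killing every mount process for the profile and then confirming the mountpoints no longer resolve. A minimal sketch of that sequence:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Best-effort: ask minikube to kill all mount processes for the profile.
        exec.Command("out/minikube-linux-arm64", "mount",
            "-p", "functional-655452", "--kill=true").Run()
        // Then confirm none of the mountpoints still resolve inside the node.
        for _, m := range []string{"/mount1", "/mount2", "/mount3"} {
            err := exec.Command("out/minikube-linux-arm64",
                "-p", "functional-655452", "ssh", "findmnt -T "+m).Run()
            fmt.Println(m, "still mounted:", err == nil)
        }
    }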

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components (0.5s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components (0.50s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort (0.22s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-655452 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-rc.1
registry.k8s.io/kube-proxy:v1.35.0-rc.1
registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
registry.k8s.io/kube-apiserver:v1.35.0-rc.1
registry.k8s.io/etcd:3.6.6-0
registry.k8s.io/coredns/coredns:v1.13.1
localhost/minikube-local-cache-test:functional-655452
localhost/kicbase/echo-server:functional-655452
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-655452 image ls --format short --alsologtostderr:
I1217 20:50:26.461373  548094 out.go:360] Setting OutFile to fd 1 ...
I1217 20:50:26.461700  548094 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 20:50:26.461736  548094 out.go:374] Setting ErrFile to fd 2...
I1217 20:50:26.461756  548094 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 20:50:26.462070  548094 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-485134/.minikube/bin
I1217 20:50:26.462739  548094 config.go:182] Loaded profile config "functional-655452": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1217 20:50:26.462919  548094 config.go:182] Loaded profile config "functional-655452": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1217 20:50:26.463465  548094 cli_runner.go:164] Run: docker container inspect functional-655452 --format={{.State.Status}}
I1217 20:50:26.483326  548094 ssh_runner.go:195] Run: systemctl --version
I1217 20:50:26.483382  548094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
I1217 20:50:26.502173  548094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/functional-655452/id_rsa Username:docker}
I1217 20:50:26.594396  548094 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort (0.22s)
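
As the stderr shows, "image ls" is ultimately backed by "sudo crictl images --output json" inside the node. A sketch that reproduces the short listing by decoding just the repo tags; the JSON field names follow the CRI casing and should be treated as an assumption:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // listing mirrors the CRI-style JSON emitted by "crictl images --output
    // json"; only the field actually needed here is declared.
    type listing struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    func main() {
        raw, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-655452",
            "ssh", "sudo crictl images --output json").Output()
        if err != nil {
            panic(err)
        }
        var l listing
        if err := json.Unmarshal(raw, &l); err != nil {
            panic(err)
        }
        for _, img := range l.Images {
            for _, tag := range img.RepoTags {
                fmt.Println(tag) // one repo:tag per line, as in the short format
            }
        }
    }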

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable (0.22s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-655452 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-controller-manager │ v1.35.0-rc.1       │ a34b3483f25ba │ 72.2MB │
│ registry.k8s.io/kube-scheduler          │ v1.35.0-rc.1       │ abca4d5226620 │ 49.8MB │
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
│ gcr.io/k8s-minikube/busybox             │ latest             │ 71a676dd070f4 │ 1.63MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ ba04bb24b9575 │ 29MB   │
│ registry.k8s.io/coredns/coredns         │ v1.13.1            │ e08f4d9d2e6ed │ 74.5MB │
│ registry.k8s.io/etcd                    │ 3.6.6-0            │ 271e49a0ebc56 │ 60.9MB │
│ registry.k8s.io/kube-apiserver          │ v1.35.0-rc.1       │ 3c6ba27e07aef │ 85MB   │
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 520kB  │
│ localhost/minikube-local-cache-test     │ functional-655452  │ a4b0dba10d4a1 │ 3.33kB │
│ registry.k8s.io/kube-proxy              │ v1.35.0-rc.1       │ 7e3acea3d87aa │ 74.1MB │
│ localhost/my-image                      │ functional-655452  │ 5e7b46ffc6962 │ 1.64MB │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
│ localhost/kicbase/echo-server           │ functional-655452  │ ce2d2cda2d858 │ 4.79MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-655452 image ls --format table --alsologtostderr:
I1217 20:50:31.106213  548628 out.go:360] Setting OutFile to fd 1 ...
I1217 20:50:31.106554  548628 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 20:50:31.106588  548628 out.go:374] Setting ErrFile to fd 2...
I1217 20:50:31.106611  548628 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 20:50:31.106922  548628 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-485134/.minikube/bin
I1217 20:50:31.107693  548628 config.go:182] Loaded profile config "functional-655452": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1217 20:50:31.107914  548628 config.go:182] Loaded profile config "functional-655452": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1217 20:50:31.108457  548628 cli_runner.go:164] Run: docker container inspect functional-655452 --format={{.State.Status}}
I1217 20:50:31.126467  548628 ssh_runner.go:195] Run: systemctl --version
I1217 20:50:31.126542  548628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
I1217 20:50:31.143853  548628 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/functional-655452/id_rsa Username:docker}
I1217 20:50:31.238252  548628 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable (0.22s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson (0.23s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 image ls --format json --alsologtostderr
E1217 20:50:30.851159  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-643319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-655452 image ls --format json --alsologtostderr:
[{"id":"a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:42360249c0c729ed0542bc8e4a6cd9ba4df358a4e5a140f6c24d5f966ee5121f","registry.k8s.io/kube-controller-manager@sha256:57ab0f75f58d99f4be7bff7bdda015fcbf1b7c20e58ba2722c8c39f751dc8c98"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-rc.1"],"size":"72170325"},{"id":"7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e","repoDigests":["registry.k8s.io/kube-proxy@sha256:709cbcd809826ad98b553d8e283a04db70fa653526d1c2a5e1b50000701b2b6f","registry.k8s.io/kube-proxy@sha256:bdd1fa8b53558a2e1967379a36b085c93faf15581e5fa9f212baf679d89c5bb5"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-rc.1"],"size":"74107287"},{"id":"abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde","repoDigests":["registry.k8s.io/kube-scheduler@sha256:8155e3db27c7081abfc8eb5da70820cfeaf0bba7449e45360e8220e670f417d3","registry.k8s.io/kube-scheduler@sha256:9ac9664e74153a
60bf2c27af77561abc33d85a716a48893c7e50ad356adc4ea0"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-rc.1"],"size":"49822549"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"90a6c424d3b91bf334276571fe4b586f4101490a35e7396064615a375b4b81db","repoDigests":["docker.io/library/ddbac02f0ce6284520bc135d1e421de5b358eae165e7903977e72d1ca670363e-tmp@sha256:58e208d27605f9acd18c63be38054c89b837d9251125906e57269a8e64f37306"],"repoTags":[],"size":"1638179"},{"id":"71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1634527"},{"id":"
ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a"],"repoTags":["localhost/kicbase/echo-server:functional-655452"],"size":"4788229"},{"id":"a4b0dba10d4a1d351a74a10b8dbd7196f37bb5aac64f0d7c69e0effc55b1fd0a","repoDigests":["localhost/minikube-local-cache-test@sha256:26d297b0745547aa1b45ef43e4465581dc07f25322dfaafe7d81e738e53b7936"],"repoTags":["localhost/minikube-local-cache-test:functional-655452"],"size":"3330"},{"id":"e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6","registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"74491780"},{"id":"3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54","repoDigests":["registry.k
8s.io/kube-apiserver@sha256:58367b5c0428495c0c12411fa7a018f5d40fe57307b85d8935b1ed35706ff7ee","registry.k8s.io/kube-apiserver@sha256:e6ee3594f9ff061c53d6721bc04b810ec4227e28da3bd98e59206d552d45cde8"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-rc.1"],"size":"85015535"},{"id":"5e7b46ffc696245c3da091d982d83709785d2ef9621c4d8a5ba7309cfb56185e","repoDigests":["localhost/my-image@sha256:b464872b3aea97e9eba06f1765a5d4456598f7a5c80c554d5d2f3722e0f96779"],"repoTags":["localhost/my-image:functional-655452"],"size":"1640791"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"519884"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47e
c28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57","repoDigests":["registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890","registry.k8s.io/etcd@sha256:aa0d8bc8f6a6c3655b8efe0a10c5bf052f5574ebe13f904c5b0c9002ce4b2561"],"repoTags":["registry.k8s.io/etcd:3.6.6-0"],"size":"60850387"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":[
"registry.k8s.io/pause:latest"],"size":"246070"},{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-655452 image ls --format json --alsologtostderr:
I1217 20:50:30.883247  548592 out.go:360] Setting OutFile to fd 1 ...
I1217 20:50:30.883480  548592 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 20:50:30.883495  548592 out.go:374] Setting ErrFile to fd 2...
I1217 20:50:30.883501  548592 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 20:50:30.883831  548592 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-485134/.minikube/bin
I1217 20:50:30.884527  548592 config.go:182] Loaded profile config "functional-655452": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1217 20:50:30.884702  548592 config.go:182] Loaded profile config "functional-655452": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1217 20:50:30.885289  548592 cli_runner.go:164] Run: docker container inspect functional-655452 --format={{.State.Status}}
I1217 20:50:30.905818  548592 ssh_runner.go:195] Run: systemctl --version
I1217 20:50:30.905888  548592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
I1217 20:50:30.924143  548592 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/functional-655452/id_rsa Username:docker}
I1217 20:50:31.019416  548592 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson (0.23s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml (0.23s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-655452 image ls --format yaml --alsologtostderr:
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:8155e3db27c7081abfc8eb5da70820cfeaf0bba7449e45360e8220e670f417d3
- registry.k8s.io/kube-scheduler@sha256:9ac9664e74153a60bf2c27af77561abc33d85a716a48893c7e50ad356adc4ea0
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-rc.1
size: "49822549"
- id: 90a6c424d3b91bf334276571fe4b586f4101490a35e7396064615a375b4b81db
repoDigests:
- docker.io/library/ddbac02f0ce6284520bc135d1e421de5b358eae165e7903977e72d1ca670363e-tmp@sha256:58e208d27605f9acd18c63be38054c89b837d9251125906e57269a8e64f37306
repoTags: []
size: "1638179"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- localhost/kicbase/echo-server:functional-655452
size: "4788229"
- id: 5e7b46ffc696245c3da091d982d83709785d2ef9621c4d8a5ba7309cfb56185e
repoDigests:
- localhost/my-image@sha256:b464872b3aea97e9eba06f1765a5d4456598f7a5c80c554d5d2f3722e0f96779
repoTags:
- localhost/my-image:functional-655452
size: "1640791"
- id: 3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:58367b5c0428495c0c12411fa7a018f5d40fe57307b85d8935b1ed35706ff7ee
- registry.k8s.io/kube-apiserver@sha256:e6ee3594f9ff061c53d6721bc04b810ec4227e28da3bd98e59206d552d45cde8
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-rc.1
size: "85015535"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
- registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "74491780"
- id: a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:42360249c0c729ed0542bc8e4a6cd9ba4df358a4e5a140f6c24d5f966ee5121f
- registry.k8s.io/kube-controller-manager@sha256:57ab0f75f58d99f4be7bff7bdda015fcbf1b7c20e58ba2722c8c39f751dc8c98
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
size: "72170325"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9
- gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
repoTags:
- gcr.io/k8s-minikube/busybox:latest
size: "1634527"
- id: a4b0dba10d4a1d351a74a10b8dbd7196f37bb5aac64f0d7c69e0effc55b1fd0a
repoDigests:
- localhost/minikube-local-cache-test@sha256:26d297b0745547aa1b45ef43e4465581dc07f25322dfaafe7d81e738e53b7936
repoTags:
- localhost/minikube-local-cache-test:functional-655452
size: "3330"
- id: 271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57
repoDigests:
- registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890
- registry.k8s.io/etcd@sha256:aa0d8bc8f6a6c3655b8efe0a10c5bf052f5574ebe13f904c5b0c9002ce4b2561
repoTags:
- registry.k8s.io/etcd:3.6.6-0
size: "60850387"
- id: 7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e
repoDigests:
- registry.k8s.io/kube-proxy@sha256:709cbcd809826ad98b553d8e283a04db70fa653526d1c2a5e1b50000701b2b6f
- registry.k8s.io/kube-proxy@sha256:bdd1fa8b53558a2e1967379a36b085c93faf15581e5fa9f212baf679d89c5bb5
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-rc.1
size: "74107287"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-655452 image ls --format yaml --alsologtostderr:
I1217 20:50:30.659488  548556 out.go:360] Setting OutFile to fd 1 ...
I1217 20:50:30.659749  548556 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 20:50:30.659783  548556 out.go:374] Setting ErrFile to fd 2...
I1217 20:50:30.659804  548556 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 20:50:30.660102  548556 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-485134/.minikube/bin
I1217 20:50:30.660798  548556 config.go:182] Loaded profile config "functional-655452": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1217 20:50:30.660981  548556 config.go:182] Loaded profile config "functional-655452": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1217 20:50:30.661591  548556 cli_runner.go:164] Run: docker container inspect functional-655452 --format={{.State.Status}}
I1217 20:50:30.679811  548556 ssh_runner.go:195] Run: systemctl --version
I1217 20:50:30.679865  548556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
I1217 20:50:30.697544  548556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/functional-655452/id_rsa Username:docker}
I1217 20:50:30.790581  548556 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml (0.23s)
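
Reproducing the two list checks outside the harness is just a matter of re-running the same CLI calls; as the stderr above shows, on the crio runtime both formats are backed by a single `sudo crictl images --output json` inside the node. A minimal sketch against this run's profile (the profile name is specific to this job):

    out/minikube-linux-arm64 -p functional-655452 image ls --format json
    out/minikube-linux-arm64 -p functional-655452 image ls --format yaml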

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild (3.82s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-655452 ssh pgrep buildkitd: exit status 1 (281.962368ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 image build -t localhost/my-image:functional-655452 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-655452 image build -t localhost/my-image:functional-655452 testdata/build --alsologtostderr: (3.299820281s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-655452 image build -t localhost/my-image:functional-655452 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 90a6c424d3b
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-655452
--> 5e7b46ffc69
Successfully tagged localhost/my-image:functional-655452
5e7b46ffc696245c3da091d982d83709785d2ef9621c4d8a5ba7309cfb56185e
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-655452 image build -t localhost/my-image:functional-655452 testdata/build --alsologtostderr:
I1217 20:50:27.118215  548238 out.go:360] Setting OutFile to fd 1 ...
I1217 20:50:27.118570  548238 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 20:50:27.118582  548238 out.go:374] Setting ErrFile to fd 2...
I1217 20:50:27.118587  548238 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 20:50:27.119303  548238 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-485134/.minikube/bin
I1217 20:50:27.120489  548238 config.go:182] Loaded profile config "functional-655452": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1217 20:50:27.121706  548238 config.go:182] Loaded profile config "functional-655452": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1217 20:50:27.122554  548238 cli_runner.go:164] Run: docker container inspect functional-655452 --format={{.State.Status}}
I1217 20:50:27.140586  548238 ssh_runner.go:195] Run: systemctl --version
I1217 20:50:27.140642  548238 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-655452
I1217 20:50:27.158018  548238 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/functional-655452/id_rsa Username:docker}
I1217 20:50:27.250397  548238 build_images.go:162] Building image from path: /tmp/build.1575216377.tar
I1217 20:50:27.250501  548238 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1217 20:50:27.258694  548238 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1575216377.tar
I1217 20:50:27.262493  548238 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1575216377.tar: stat -c "%s %y" /var/lib/minikube/build/build.1575216377.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1575216377.tar': No such file or directory
I1217 20:50:27.262525  548238 ssh_runner.go:362] scp /tmp/build.1575216377.tar --> /var/lib/minikube/build/build.1575216377.tar (3072 bytes)
I1217 20:50:27.280932  548238 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1575216377
I1217 20:50:27.289809  548238 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1575216377 -xf /var/lib/minikube/build/build.1575216377.tar
I1217 20:50:27.298364  548238 crio.go:315] Building image: /var/lib/minikube/build/build.1575216377
I1217 20:50:27.298441  548238 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-655452 /var/lib/minikube/build/build.1575216377 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1217 20:50:30.340586  548238 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-655452 /var/lib/minikube/build/build.1575216377 --cgroup-manager=cgroupfs: (3.042116627s)
I1217 20:50:30.340661  548238 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1575216377
I1217 20:50:30.348963  548238 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1575216377.tar
I1217 20:50:30.357178  548238 build_images.go:218] Built localhost/my-image:functional-655452 from /tmp/build.1575216377.tar
I1217 20:50:30.357215  548238 build_images.go:134] succeeded building to: functional-655452
I1217 20:50:30.357222  548238 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild (3.82s)
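
The stderr above documents the crio build path: the `ssh pgrep buildkitd` probe fails, so minikube tars the local build context, copies it to /var/lib/minikube/build on the node, unpacks it, and builds with podman. A hand-run equivalent of the two key steps, using this run's generated temp name (the build.* suffix is regenerated every run, so substitute your own):

    out/minikube-linux-arm64 -p functional-655452 ssh "sudo tar -C /var/lib/minikube/build/build.1575216377 -xf /var/lib/minikube/build/build.1575216377.tar"
    out/minikube-linux-arm64 -p functional-655452 ssh "sudo podman build -t localhost/my-image:functional-655452 /var/lib/minikube/build/build.1575216377 --cgroup-manager=cgroupfs"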

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup (0.26s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-655452
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup (0.26s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon (1.2s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 image load --daemon kicbase/echo-server:functional-655452 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon (1.20s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon (0.82s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 image load --daemon kicbase/echo-server:functional-655452 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon (0.82s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon (1.05s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-655452
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 image load --daemon kicbase/echo-server:functional-655452 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon (1.05s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile (0.36s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 image save kicbase/echo-server:functional-655452 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile (0.36s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove (0.53s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 image rm kicbase/echo-server:functional-655452 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove (0.53s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile (0.76s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile (0.76s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon (0.43s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-655452
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 image save --daemon kicbase/echo-server:functional-655452 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-655452
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon (0.43s)
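
The save/remove/load tests round-trip an image through a tar archive and back into the host docker daemon. Note the naming detail the final `docker image inspect` depends on: `image save --daemon` lands the image under the localhost/ prefix. A condensed sketch of the sequence (the tar path here is illustrative; this job used its Jenkins workspace):

    out/minikube-linux-arm64 -p functional-655452 image save kicbase/echo-server:functional-655452 /tmp/echo-server-save.tar
    out/minikube-linux-arm64 -p functional-655452 image rm kicbase/echo-server:functional-655452
    out/minikube-linux-arm64 -p functional-655452 image load /tmp/echo-server-save.tar
    out/minikube-linux-arm64 -p functional-655452 image save --daemon kicbase/echo-server:functional-655452
    docker image inspect localhost/kicbase/echo-server:functional-655452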

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes (0.15s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes (0.15s)
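
All three UpdateContextCmd variants drive the same subcommand, which rewrites the profile's kubeconfig entry to match the cluster's current API endpoint:

    out/minikube-linux-arm64 -p functional-655452 update-context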

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters (0.14s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-655452 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters (0.14s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images (0.04s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-655452
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images (0.04s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image (0.02s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-655452
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image (0.02s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-655452
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (148.88s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1217 20:53:21.928562  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:53:21.934950  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:53:21.946313  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:53:21.967713  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:53:22.009927  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:53:22.091326  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:53:22.252720  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:53:22.574108  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:53:23.216127  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:53:24.497368  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:53:27.059242  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:53:32.181001  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:53:42.422917  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:53:56.661706  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:54:02.904194  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:54:43.865486  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-148567 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (2m28.001930684s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (148.88s)
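
The profile started here is a three-control-plane HA topology (ha-148567, -m02, -m03); the 192.168.49.254:8443 endpoint seen in the status logs below appears to be the shared load-balancer address rather than any single node. The flags are those from ha_test.go:101; wall-clock time will vary by host:

    out/minikube-linux-arm64 -p ha-148567 start --ha --memory 3072 --wait true --driver=docker --container-runtime=crio
    out/minikube-linux-arm64 -p ha-148567 status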

TestMultiControlPlane/serial/DeployApp (7.16s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-148567 kubectl -- rollout status deployment/busybox: (4.023478728s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 kubectl -- exec busybox-7b57f96db7-d5rt7 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 kubectl -- exec busybox-7b57f96db7-lc5vz -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 kubectl -- exec busybox-7b57f96db7-wpzp9 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 kubectl -- exec busybox-7b57f96db7-d5rt7 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 kubectl -- exec busybox-7b57f96db7-lc5vz -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 kubectl -- exec busybox-7b57f96db7-wpzp9 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 kubectl -- exec busybox-7b57f96db7-d5rt7 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 kubectl -- exec busybox-7b57f96db7-lc5vz -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 kubectl -- exec busybox-7b57f96db7-wpzp9 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.16s)
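
The DNS assertions run nslookup from every busybox replica against three names of increasing qualification. Pod names are generated per run, so list them first and substitute one for the <pod> placeholder:

    out/minikube-linux-arm64 -p ha-148567 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
    out/minikube-linux-arm64 -p ha-148567 kubectl -- exec <pod> -- nslookup kubernetes.default.svc.cluster.local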

TestMultiControlPlane/serial/PingHostFromPods (1.51s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 kubectl -- exec busybox-7b57f96db7-d5rt7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 kubectl -- exec busybox-7b57f96db7-d5rt7 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 kubectl -- exec busybox-7b57f96db7-lc5vz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 kubectl -- exec busybox-7b57f96db7-lc5vz -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 kubectl -- exec busybox-7b57f96db7-wpzp9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 kubectl -- exec busybox-7b57f96db7-wpzp9 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.51s)
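
Pod-to-host connectivity is checked by resolving host.minikube.internal inside each pod and pinging the returned address (192.168.49.1, the docker network gateway in this run); <pod> is a placeholder as above:

    out/minikube-linux-arm64 -p ha-148567 kubectl -- exec <pod> -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    out/minikube-linux-arm64 -p ha-148567 kubectl -- exec <pod> -- sh -c "ping -c 1 192.168.49.1"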

TestMultiControlPlane/serial/AddWorkerNode (30.56s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 node add --alsologtostderr -v 5
E1217 20:55:30.851807  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-643319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-148567 node add --alsologtostderr -v 5: (29.466066506s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-148567 status --alsologtostderr -v 5: (1.092886029s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (30.56s)

TestMultiControlPlane/serial/NodeLabels (0.11s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-148567 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.08s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.075169926s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.08s)

TestMultiControlPlane/serial/CopyFile (19.89s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-148567 status --output json --alsologtostderr -v 5: (1.043813683s)
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 cp testdata/cp-test.txt ha-148567:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 ssh -n ha-148567 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 cp ha-148567:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3374009435/001/cp-test_ha-148567.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 ssh -n ha-148567 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 cp ha-148567:/home/docker/cp-test.txt ha-148567-m02:/home/docker/cp-test_ha-148567_ha-148567-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 ssh -n ha-148567 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 ssh -n ha-148567-m02 "sudo cat /home/docker/cp-test_ha-148567_ha-148567-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 cp ha-148567:/home/docker/cp-test.txt ha-148567-m03:/home/docker/cp-test_ha-148567_ha-148567-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 ssh -n ha-148567 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 ssh -n ha-148567-m03 "sudo cat /home/docker/cp-test_ha-148567_ha-148567-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 cp ha-148567:/home/docker/cp-test.txt ha-148567-m04:/home/docker/cp-test_ha-148567_ha-148567-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 ssh -n ha-148567 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 ssh -n ha-148567-m04 "sudo cat /home/docker/cp-test_ha-148567_ha-148567-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 cp testdata/cp-test.txt ha-148567-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 ssh -n ha-148567-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 cp ha-148567-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3374009435/001/cp-test_ha-148567-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 ssh -n ha-148567-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 cp ha-148567-m02:/home/docker/cp-test.txt ha-148567:/home/docker/cp-test_ha-148567-m02_ha-148567.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 ssh -n ha-148567-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 ssh -n ha-148567 "sudo cat /home/docker/cp-test_ha-148567-m02_ha-148567.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 cp ha-148567-m02:/home/docker/cp-test.txt ha-148567-m03:/home/docker/cp-test_ha-148567-m02_ha-148567-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 ssh -n ha-148567-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 ssh -n ha-148567-m03 "sudo cat /home/docker/cp-test_ha-148567-m02_ha-148567-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 cp ha-148567-m02:/home/docker/cp-test.txt ha-148567-m04:/home/docker/cp-test_ha-148567-m02_ha-148567-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 ssh -n ha-148567-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 ssh -n ha-148567-m04 "sudo cat /home/docker/cp-test_ha-148567-m02_ha-148567-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 cp testdata/cp-test.txt ha-148567-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 ssh -n ha-148567-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 cp ha-148567-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3374009435/001/cp-test_ha-148567-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 ssh -n ha-148567-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 cp ha-148567-m03:/home/docker/cp-test.txt ha-148567:/home/docker/cp-test_ha-148567-m03_ha-148567.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 ssh -n ha-148567-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 ssh -n ha-148567 "sudo cat /home/docker/cp-test_ha-148567-m03_ha-148567.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 cp ha-148567-m03:/home/docker/cp-test.txt ha-148567-m02:/home/docker/cp-test_ha-148567-m03_ha-148567-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 ssh -n ha-148567-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 ssh -n ha-148567-m02 "sudo cat /home/docker/cp-test_ha-148567-m03_ha-148567-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 cp ha-148567-m03:/home/docker/cp-test.txt ha-148567-m04:/home/docker/cp-test_ha-148567-m03_ha-148567-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 ssh -n ha-148567-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 ssh -n ha-148567-m04 "sudo cat /home/docker/cp-test_ha-148567-m03_ha-148567-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 cp testdata/cp-test.txt ha-148567-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 ssh -n ha-148567-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 cp ha-148567-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3374009435/001/cp-test_ha-148567-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 ssh -n ha-148567-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 cp ha-148567-m04:/home/docker/cp-test.txt ha-148567:/home/docker/cp-test_ha-148567-m04_ha-148567.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 ssh -n ha-148567-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 ssh -n ha-148567 "sudo cat /home/docker/cp-test_ha-148567-m04_ha-148567.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 cp ha-148567-m04:/home/docker/cp-test.txt ha-148567-m02:/home/docker/cp-test_ha-148567-m04_ha-148567-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 ssh -n ha-148567-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 ssh -n ha-148567-m02 "sudo cat /home/docker/cp-test_ha-148567-m04_ha-148567-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 cp ha-148567-m04:/home/docker/cp-test.txt ha-148567-m03:/home/docker/cp-test_ha-148567-m04_ha-148567-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 ssh -n ha-148567-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 ssh -n ha-148567-m03 "sudo cat /home/docker/cp-test_ha-148567-m04_ha-148567-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.89s)
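
CopyFile walks every host-to-node and node-to-node direction of `minikube cp`, verifying each copy with `ssh -n <node> sudo cat`. One leg of the matrix, exactly as run above:

    out/minikube-linux-arm64 -p ha-148567 cp testdata/cp-test.txt ha-148567-m02:/home/docker/cp-test.txt
    out/minikube-linux-arm64 -p ha-148567 ssh -n ha-148567-m02 "sudo cat /home/docker/cp-test.txt"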

TestMultiControlPlane/serial/StopSecondaryNode (12.84s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 node stop m02 --alsologtostderr -v 5
E1217 20:56:05.790166  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-148567 node stop m02 --alsologtostderr -v 5: (12.058193289s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-148567 status --alsologtostderr -v 5: exit status 7 (783.61288ms)

-- stdout --
	ha-148567
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-148567-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-148567-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-148567-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1217 20:56:07.395721  564705 out.go:360] Setting OutFile to fd 1 ...
	I1217 20:56:07.395850  564705 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:56:07.395871  564705 out.go:374] Setting ErrFile to fd 2...
	I1217 20:56:07.395877  564705 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:56:07.396204  564705 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-485134/.minikube/bin
	I1217 20:56:07.396441  564705 out.go:368] Setting JSON to false
	I1217 20:56:07.396472  564705 mustload.go:66] Loading cluster: ha-148567
	I1217 20:56:07.396939  564705 config.go:182] Loaded profile config "ha-148567": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:56:07.396956  564705 status.go:174] checking status of ha-148567 ...
	I1217 20:56:07.397460  564705 notify.go:221] Checking for updates...
	I1217 20:56:07.397594  564705 cli_runner.go:164] Run: docker container inspect ha-148567 --format={{.State.Status}}
	I1217 20:56:07.426543  564705 status.go:371] ha-148567 host status = "Running" (err=<nil>)
	I1217 20:56:07.426615  564705 host.go:66] Checking if "ha-148567" exists ...
	I1217 20:56:07.426947  564705 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-148567
	I1217 20:56:07.456042  564705 host.go:66] Checking if "ha-148567" exists ...
	I1217 20:56:07.456354  564705 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 20:56:07.456399  564705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567
	I1217 20:56:07.480728  564705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/ha-148567/id_rsa Username:docker}
	I1217 20:56:07.594912  564705 ssh_runner.go:195] Run: systemctl --version
	I1217 20:56:07.601887  564705 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 20:56:07.616879  564705 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 20:56:07.684147  564705 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-12-17 20:56:07.672613753 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 20:56:07.684729  564705 kubeconfig.go:125] found "ha-148567" server: "https://192.168.49.254:8443"
	I1217 20:56:07.684760  564705 api_server.go:166] Checking apiserver status ...
	I1217 20:56:07.684813  564705 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:56:07.697397  564705 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1309/cgroup
	I1217 20:56:07.706229  564705 api_server.go:182] apiserver freezer: "2:freezer:/docker/88230c4afd3a17f60bc70f3b880924c24773c5848052be68927148210c5cca08/crio/crio-519856b70282fb63226be78879ba517420b48e0593149cbebc6373235799f70e"
	I1217 20:56:07.706316  564705 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/88230c4afd3a17f60bc70f3b880924c24773c5848052be68927148210c5cca08/crio/crio-519856b70282fb63226be78879ba517420b48e0593149cbebc6373235799f70e/freezer.state
	I1217 20:56:07.714232  564705 api_server.go:204] freezer state: "THAWED"
	I1217 20:56:07.714265  564705 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1217 20:56:07.724650  564705 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1217 20:56:07.724685  564705 status.go:463] ha-148567 apiserver status = Running (err=<nil>)
	I1217 20:56:07.724697  564705 status.go:176] ha-148567 status: &{Name:ha-148567 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 20:56:07.724725  564705 status.go:174] checking status of ha-148567-m02 ...
	I1217 20:56:07.725035  564705 cli_runner.go:164] Run: docker container inspect ha-148567-m02 --format={{.State.Status}}
	I1217 20:56:07.742380  564705 status.go:371] ha-148567-m02 host status = "Stopped" (err=<nil>)
	I1217 20:56:07.742403  564705 status.go:384] host is not running, skipping remaining checks
	I1217 20:56:07.742410  564705 status.go:176] ha-148567-m02 status: &{Name:ha-148567-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 20:56:07.742432  564705 status.go:174] checking status of ha-148567-m03 ...
	I1217 20:56:07.742989  564705 cli_runner.go:164] Run: docker container inspect ha-148567-m03 --format={{.State.Status}}
	I1217 20:56:07.761250  564705 status.go:371] ha-148567-m03 host status = "Running" (err=<nil>)
	I1217 20:56:07.761278  564705 host.go:66] Checking if "ha-148567-m03" exists ...
	I1217 20:56:07.761578  564705 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-148567-m03
	I1217 20:56:07.781372  564705 host.go:66] Checking if "ha-148567-m03" exists ...
	I1217 20:56:07.781785  564705 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 20:56:07.781832  564705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567-m03
	I1217 20:56:07.800308  564705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33193 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/ha-148567-m03/id_rsa Username:docker}
	I1217 20:56:07.898001  564705 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 20:56:07.913587  564705 kubeconfig.go:125] found "ha-148567" server: "https://192.168.49.254:8443"
	I1217 20:56:07.913620  564705 api_server.go:166] Checking apiserver status ...
	I1217 20:56:07.913685  564705 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:56:07.926503  564705 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1249/cgroup
	I1217 20:56:07.937041  564705 api_server.go:182] apiserver freezer: "2:freezer:/docker/04b64266bdd1f39325cf09cbc08b7c2a416813e3c7e7e9a7ccf576862f68f5fb/crio/crio-e1565479b65e855ae6f20ddb535094caa8a204a6cf03c8553e865002d7e1fc83"
	I1217 20:56:07.937149  564705 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/04b64266bdd1f39325cf09cbc08b7c2a416813e3c7e7e9a7ccf576862f68f5fb/crio/crio-e1565479b65e855ae6f20ddb535094caa8a204a6cf03c8553e865002d7e1fc83/freezer.state
	I1217 20:56:07.946301  564705 api_server.go:204] freezer state: "THAWED"
	I1217 20:56:07.946373  564705 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1217 20:56:07.954522  564705 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1217 20:56:07.954551  564705 status.go:463] ha-148567-m03 apiserver status = Running (err=<nil>)
	I1217 20:56:07.954562  564705 status.go:176] ha-148567-m03 status: &{Name:ha-148567-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 20:56:07.954580  564705 status.go:174] checking status of ha-148567-m04 ...
	I1217 20:56:07.954930  564705 cli_runner.go:164] Run: docker container inspect ha-148567-m04 --format={{.State.Status}}
	I1217 20:56:07.972330  564705 status.go:371] ha-148567-m04 host status = "Running" (err=<nil>)
	I1217 20:56:07.972357  564705 host.go:66] Checking if "ha-148567-m04" exists ...
	I1217 20:56:07.972649  564705 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-148567-m04
	I1217 20:56:07.992320  564705 host.go:66] Checking if "ha-148567-m04" exists ...
	I1217 20:56:07.992644  564705 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 20:56:07.992690  564705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148567-m04
	I1217 20:56:08.012297  564705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33198 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/ha-148567-m04/id_rsa Username:docker}
	I1217 20:56:08.105117  564705 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 20:56:08.122816  564705 status.go:176] ha-148567-m04 status: &{Name:ha-148567-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.84s)
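
The non-zero exit above is expected: in these runs `status` exits with code 7 whenever any node is down, so the test asserts on the per-node fields in stdout rather than on the exit code. To reproduce:

    out/minikube-linux-arm64 -p ha-148567 node stop m02
    out/minikube-linux-arm64 -p ha-148567 status; echo "exit: $?"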

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.82s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.82s)

TestMultiControlPlane/serial/RestartSecondaryNode (33.2s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-148567 node start m02 --alsologtostderr -v 5: (31.865361837s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-148567 status --alsologtostderr -v 5: (1.16421493s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (33.20s)
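
Restarting the stopped member is the inverse operation; the trailing `kubectl get nodes` is what confirms m02 rejoined:

    out/minikube-linux-arm64 -p ha-148567 node start m02
    kubectl get nodes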

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.4s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.40235653s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.40s)

TestMultiControlPlane/serial/DeleteSecondaryNode (12.01s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 node delete m03 --alsologtostderr -v 5
E1217 21:03:56.661357  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-148567 node delete m03 --alsologtostderr -v 5: (11.014020605s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (12.01s)
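Note: the readiness check above (`kubectl get nodes -o go-template=...`) walks the JSON-decoded node list, which is why the template uses lowercase map keys (.items, .status.conditions) rather than Go struct fields. A minimal stand-alone sketch of the same template run against mock data (the file name and mock shape are hypothetical, not part of the suite):

    // readiness_template_demo.go - hypothetical offline replay of the
    // go-template the test feeds to kubectl above; the mock JSON stands in
    // for `kubectl get nodes -o json`.
    package main

    import (
        "encoding/json"
        "log"
        "os"
        "text/template"
    )

    func main() {
        // Two mock nodes shaped like a v1 NodeList. Lowercase keys matter:
        // the template walks decoded JSON maps, not typed Go structs.
        mock := []byte(`{"items":[
            {"status":{"conditions":[{"type":"Ready","status":"True"}]}},
            {"status":{"conditions":[{"type":"Ready","status":"False"}]}}]}`)

        var nodeList interface{}
        if err := json.Unmarshal(mock, &nodeList); err != nil {
            log.Fatal(err)
        }

        // The same template string the test passes via -o go-template=...
        const src = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
        tmpl := template.Must(template.New("ready").Parse(src))
        if err := tmpl.Execute(os.Stdout, nodeList); err != nil {
            log.Fatal(err) // prints " True" then " False", one line per node
        }
    }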

TestMultiControlPlane/serial/StopCluster (36.07s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-148567 stop --alsologtostderr -v 5: (35.941302019s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-148567 status --alsologtostderr -v 5: exit status 7 (124.239269ms)
-- stdout --
	ha-148567
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-148567-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-148567-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1217 21:04:47.177111  577845 out.go:360] Setting OutFile to fd 1 ...
	I1217 21:04:47.177310  577845 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 21:04:47.177345  577845 out.go:374] Setting ErrFile to fd 2...
	I1217 21:04:47.177365  577845 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 21:04:47.177691  577845 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-485134/.minikube/bin
	I1217 21:04:47.177953  577845 out.go:368] Setting JSON to false
	I1217 21:04:47.178026  577845 mustload.go:66] Loading cluster: ha-148567
	I1217 21:04:47.178098  577845 notify.go:221] Checking for updates...
	I1217 21:04:47.179163  577845 config.go:182] Loaded profile config "ha-148567": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 21:04:47.179218  577845 status.go:174] checking status of ha-148567 ...
	I1217 21:04:47.179857  577845 cli_runner.go:164] Run: docker container inspect ha-148567 --format={{.State.Status}}
	I1217 21:04:47.198752  577845 status.go:371] ha-148567 host status = "Stopped" (err=<nil>)
	I1217 21:04:47.198772  577845 status.go:384] host is not running, skipping remaining checks
	I1217 21:04:47.198779  577845 status.go:176] ha-148567 status: &{Name:ha-148567 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 21:04:47.198812  577845 status.go:174] checking status of ha-148567-m02 ...
	I1217 21:04:47.199138  577845 cli_runner.go:164] Run: docker container inspect ha-148567-m02 --format={{.State.Status}}
	I1217 21:04:47.231309  577845 status.go:371] ha-148567-m02 host status = "Stopped" (err=<nil>)
	I1217 21:04:47.231329  577845 status.go:384] host is not running, skipping remaining checks
	I1217 21:04:47.231335  577845 status.go:176] ha-148567-m02 status: &{Name:ha-148567-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 21:04:47.231361  577845 status.go:174] checking status of ha-148567-m04 ...
	I1217 21:04:47.231703  577845 cli_runner.go:164] Run: docker container inspect ha-148567-m04 --format={{.State.Status}}
	I1217 21:04:47.250321  577845 status.go:371] ha-148567-m04 host status = "Stopped" (err=<nil>)
	I1217 21:04:47.250342  577845 status.go:384] host is not running, skipping remaining checks
	I1217 21:04:47.250349  577845 status.go:176] ha-148567-m04 status: &{Name:ha-148567-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.07s)
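Note: `status` exiting 7 with every component Stopped is the expected outcome after a full `stop`; the exit code encodes component state rather than a command failure, which is why the test still passes. A rough sketch of how a wrapper could tolerate that, assuming the same relative binary path used in this run:

    // status_exitcode_demo.go - hypothetical wrapper around `minikube status`;
    // the relative binary path below is copied from this run and is an assumption.
    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-linux-arm64", "-p", "ha-148567", "status")
        out, err := cmd.Output()
        fmt.Print(string(out))

        var exitErr *exec.ExitError
        switch {
        case err == nil:
            fmt.Println("all components running (exit 0)")
        case errors.As(err, &exitErr):
            // The stopped cluster above produced exit status 7; treat non-zero
            // codes as state information, not as a hard error.
            fmt.Println("status exit code:", exitErr.ExitCode())
        default:
            fmt.Println("could not run minikube:", err)
        }
    }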

TestMultiControlPlane/serial/RestartCluster (91.98s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1217 21:05:30.851776  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-643319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-148567 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m30.962079856s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (91.98s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.78s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.78s)

TestMultiControlPlane/serial/AddSecondaryNode (80.28s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-148567 node add --control-plane --alsologtostderr -v 5: (1m19.220683363s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-148567 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-148567 status --alsologtostderr -v 5: (1.056912601s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (80.28s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.03s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.03018384s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.03s)

TestJSONOutput/start/Command (52.62s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-150253 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E1217 21:08:21.927888  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 21:08:39.732545  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-150253 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (52.610080266s)
--- PASS: TestJSONOutput/start/Command (52.62s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.87s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-150253 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-150253 --output=json --user=testUser: (5.873042952s)
--- PASS: TestJSONOutput/stop/Command (5.87s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.25s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-989053 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-989053 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (98.793857ms)
-- stdout --
	{"specversion":"1.0","id":"2c8557b6-5dfe-4dd4-bbc1-05a0b8c52b17","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-989053] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"0563f5d0-f2fb-4b0b-86c2-b2b96ab79bf6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21808"}}
	{"specversion":"1.0","id":"89a36714-1e85-49ef-a6fc-7a5784d2669a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"55d9ad26-922e-4e75-83e2-1747e35b4f0b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21808-485134/kubeconfig"}}
	{"specversion":"1.0","id":"6d75f44d-1e89-4584-b76c-557e1bcfc8c0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-485134/.minikube"}}
	{"specversion":"1.0","id":"4ee2f6fb-a7fe-45b3-ae90-3b6f56444bd2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"d945680c-2e79-45e6-ba0d-2fd9b7b15a9c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"8d9c734d-afc9-402f-9538-5446f3174835","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-989053" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-989053
--- PASS: TestErrorJSONOutput (0.25s)
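Note: every `--output=json` line above is one self-contained CloudEvents-style JSON object (specversion, id, source, type, data). A minimal consumer sketch that models only the fields visible in this log and surfaces `io.k8s.sigs.minikube.error` events such as the DRV_UNSUPPORTED_OS one:

    // json_event_demo.go - hypothetical consumer of minikube's --output=json
    // stream; the struct mirrors only the fields visible in the log above.
    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "os"
    )

    type event struct {
        Type string            `json:"type"`
        Data map[string]string `json:"data"`
    }

    func main() {
        // Pipe `minikube start --output=json ...` into stdin; each line is
        // one JSON object.
        sc := bufio.NewScanner(os.Stdin)
        for sc.Scan() {
            var e event
            if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
                continue // skip any non-JSON noise in the stream
            }
            if e.Type == "io.k8s.sigs.minikube.error" {
                fmt.Printf("error %s: %s\n", e.Data["exitcode"], e.Data["message"])
            }
        }
    }

Fed the stdout captured above, it would print: error 56: The driver 'fail' is not supported on linux/arm64.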

TestKicCustomNetwork/create_custom_network (40.01s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-869087 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-869087 --network=: (37.812767458s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-869087" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-869087
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-869087: (2.168229861s)
--- PASS: TestKicCustomNetwork/create_custom_network (40.01s)

TestKicCustomNetwork/use_default_bridge_network (35.15s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-055446 --network=bridge
E1217 21:09:44.994093  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-055446 --network=bridge: (32.935406749s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-055446" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-055446
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-055446: (2.188157215s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (35.15s)

TestKicExistingNetwork (34.66s)

=== RUN   TestKicExistingNetwork
I1217 21:10:16.124906  488412 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1217 21:10:16.141063  488412 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1217 21:10:16.141147  488412 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1217 21:10:16.141165  488412 cli_runner.go:164] Run: docker network inspect existing-network
W1217 21:10:16.157940  488412 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1217 21:10:16.157972  488412 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]
stderr:
Error response from daemon: network existing-network not found
I1217 21:10:16.157988  488412 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found
** /stderr **
I1217 21:10:16.158089  488412 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1217 21:10:16.177302  488412 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-254979ff9069 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ae:ac:44:40:5e:f0} reservation:<nil>}
I1217 21:10:16.178259  488412 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001d41310}
I1217 21:10:16.178299  488412 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1217 21:10:16.178351  488412 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1217 21:10:16.246942  488412 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-984919 --network=existing-network
E1217 21:10:30.851730  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-643319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-984919 --network=existing-network: (32.353221237s)
helpers_test.go:176: Cleaning up "existing-network-984919" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-984919
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-984919: (2.151178958s)
I1217 21:10:50.768525  488412 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (34.66s)
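Note: the test pre-creates `existing-network` with the exact `docker network create` flags logged by network_create.go above, then verifies minikube adopts the network instead of building its own. A hypothetical replay of that setup step via os/exec, shelling out the way cli_runner.go does (it assumes 192.168.58.0/24 is still free, as the subnet picker concluded here):

    // precreate_network_demo.go - hypothetical replay of the network setup
    // step performed by this test; flags copied from the log above.
    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    func main() {
        args := []string{
            "network", "create",
            "--driver=bridge",
            "--subnet=192.168.58.0/24",
            "--gateway=192.168.58.1",
            "-o", "--ip-masq", "-o", "--icc",
            "-o", "com.docker.network.driver.mtu=1500",
            "--label=created_by.minikube.sigs.k8s.io=true",
            "--label=name.minikube.sigs.k8s.io=existing-network",
            "existing-network",
        }
        out, err := exec.Command("docker", args...).CombinedOutput()
        if err != nil {
            log.Fatalf("docker network create failed: %v\n%s", err, out)
        }
        fmt.Printf("created network: %s", out) // docker prints the network ID
    }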

TestKicCustomSubnet (35.59s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-203998 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-203998 --subnet=192.168.60.0/24: (33.354500993s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-203998 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:176: Cleaning up "custom-subnet-203998" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-203998
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-203998: (2.211263087s)
--- PASS: TestKicCustomSubnet (35.59s)

TestKicStaticIP (34.89s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-637156 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-637156 --static-ip=192.168.200.200: (32.276678042s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-637156 ip
helpers_test.go:176: Cleaning up "static-ip-637156" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-637156
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-637156: (2.442465609s)
--- PASS: TestKicStaticIP (34.89s)

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (78s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-892413 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-892413 --driver=docker  --container-runtime=crio: (36.962808267s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-895293 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-895293 --driver=docker  --container-runtime=crio: (35.000530366s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-892413
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-895293
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:176: Cleaning up "second-895293" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p second-895293
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p second-895293: (2.166201945s)
helpers_test.go:176: Cleaning up "first-892413" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p first-892413
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p first-892413: (2.426276286s)
--- PASS: TestMinikubeProfile (78.00s)
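Note: `profile list -ojson` is run twice above, once after each `profile` switch. A sketch of consuming that output; the top-level valid/invalid arrays and the Name field are assumptions about the schema here, so the decode stays generic and fails loudly if they do not hold:

    // profile_list_demo.go - hypothetical reader for `profile list -ojson`;
    // the schema (valid/invalid arrays of objects with a Name) is assumed,
    // and the relative binary path is copied from this run.
    package main

    import (
        "encoding/json"
        "fmt"
        "log"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("out/minikube-linux-arm64", "profile", "list", "-ojson").Output()
        if err != nil {
            log.Fatal(err)
        }
        var profiles map[string][]map[string]interface{}
        if err := json.Unmarshal(out, &profiles); err != nil {
            log.Fatal(err) // schema differed from the assumption above
        }
        for group, list := range profiles { // e.g. "valid" / "invalid"
            for _, p := range list {
                fmt.Printf("%s profile: %v\n", group, p["Name"])
            }
        }
    }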

TestMountStart/serial/StartWithMountFirst (8.83s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-374267 --memory=3072 --mount-string /tmp/TestMountStartserial809754038/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
E1217 21:13:21.928603  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-374267 --memory=3072 --mount-string /tmp/TestMountStartserial809754038/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.831929992s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.83s)

TestMountStart/serial/VerifyMountFirst (0.28s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-374267 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)

TestMountStart/serial/StartWithMountSecond (9.28s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-376581 --memory=3072 --mount-string /tmp/TestMountStartserial809754038/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-376581 --memory=3072 --mount-string /tmp/TestMountStartserial809754038/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.280657841s)
--- PASS: TestMountStart/serial/StartWithMountSecond (9.28s)

TestMountStart/serial/VerifyMountSecond (0.27s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-376581 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

TestMountStart/serial/DeleteFirst (1.74s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-374267 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-374267 --alsologtostderr -v=5: (1.740214891s)
--- PASS: TestMountStart/serial/DeleteFirst (1.74s)

TestMountStart/serial/VerifyMountPostDelete (0.3s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-376581 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.30s)

TestMountStart/serial/Stop (1.33s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-376581
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-376581: (1.325652182s)
--- PASS: TestMountStart/serial/Stop (1.33s)

TestMountStart/serial/RestartStopped (8.16s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-376581
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-376581: (7.157950111s)
--- PASS: TestMountStart/serial/RestartStopped (8.16s)

TestMountStart/serial/VerifyMountPostStop (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-376581 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

TestMultiNode/serial/FreshStart2Nodes (79.44s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-956658 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1217 21:13:56.660824  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-956658 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (1m18.891530994s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-956658 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (79.44s)

TestMultiNode/serial/DeployApp2Nodes (4.74s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-956658 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-956658 -- rollout status deployment/busybox
E1217 21:15:13.926364  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-643319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-956658 -- rollout status deployment/busybox: (2.98495414s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-956658 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-956658 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-956658 -- exec busybox-7b57f96db7-sqjjk -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-956658 -- exec busybox-7b57f96db7-t8c9v -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-956658 -- exec busybox-7b57f96db7-sqjjk -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-956658 -- exec busybox-7b57f96db7-t8c9v -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-956658 -- exec busybox-7b57f96db7-sqjjk -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-956658 -- exec busybox-7b57f96db7-t8c9v -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.74s)

TestMultiNode/serial/PingHostFrom2Pods (0.93s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-956658 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-956658 -- exec busybox-7b57f96db7-sqjjk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-956658 -- exec busybox-7b57f96db7-sqjjk -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-956658 -- exec busybox-7b57f96db7-t8c9v -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-956658 -- exec busybox-7b57f96db7-t8c9v -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.93s)
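Note: the `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3` pipeline above relies on busybox nslookup's fixed layout, with the resolved address sitting in field 3 of line 5. A small Go equivalent of that parse; the mock response below is an assumed busybox-style layout, not output captured in this log:

    // hostip_parse_demo.go - hypothetical Go equivalent of the busybox
    // pipeline above: take line 5 (awk 'NR==5'), then field 3 (cut -d' ' -f3).
    package main

    import (
        "fmt"
        "strings"
    )

    func hostIP(nslookupOut string) string {
        lines := strings.Split(nslookupOut, "\n")
        if len(lines) < 5 {
            return ""
        }
        // Split on single spaces to mirror cut -d' ', which does not
        // collapse runs of delimiters the way strings.Fields would.
        fields := strings.Split(lines[4], " ") // NR==5 -> index 4
        if len(fields) < 3 {
            return ""
        }
        return fields[2] // cut -f3
    }

    func main() {
        mock := "Server:    10.96.0.10\n" +
            "Address 1: 10.96.0.10\n" +
            "\n" +
            "Name:      host.minikube.internal\n" +
            "Address 1: 192.168.67.1\n"
        fmt.Println(hostIP(mock)) // 192.168.67.1
    }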

TestMultiNode/serial/AddNode (31.61s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-956658 -v=5 --alsologtostderr
E1217 21:15:30.851932  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-643319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-956658 -v=5 --alsologtostderr: (30.929384947s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-956658 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (31.61s)

TestMultiNode/serial/MultiNodeLabels (0.1s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-956658 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.10s)

TestMultiNode/serial/ProfileList (0.71s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.71s)

TestMultiNode/serial/CopyFile (10.27s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-956658 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-956658 cp testdata/cp-test.txt multinode-956658:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-956658 ssh -n multinode-956658 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-956658 cp multinode-956658:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3753182169/001/cp-test_multinode-956658.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-956658 ssh -n multinode-956658 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-956658 cp multinode-956658:/home/docker/cp-test.txt multinode-956658-m02:/home/docker/cp-test_multinode-956658_multinode-956658-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-956658 ssh -n multinode-956658 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-956658 ssh -n multinode-956658-m02 "sudo cat /home/docker/cp-test_multinode-956658_multinode-956658-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-956658 cp multinode-956658:/home/docker/cp-test.txt multinode-956658-m03:/home/docker/cp-test_multinode-956658_multinode-956658-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-956658 ssh -n multinode-956658 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-956658 ssh -n multinode-956658-m03 "sudo cat /home/docker/cp-test_multinode-956658_multinode-956658-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-956658 cp testdata/cp-test.txt multinode-956658-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-956658 ssh -n multinode-956658-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-956658 cp multinode-956658-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3753182169/001/cp-test_multinode-956658-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-956658 ssh -n multinode-956658-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-956658 cp multinode-956658-m02:/home/docker/cp-test.txt multinode-956658:/home/docker/cp-test_multinode-956658-m02_multinode-956658.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-956658 ssh -n multinode-956658-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-956658 ssh -n multinode-956658 "sudo cat /home/docker/cp-test_multinode-956658-m02_multinode-956658.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-956658 cp multinode-956658-m02:/home/docker/cp-test.txt multinode-956658-m03:/home/docker/cp-test_multinode-956658-m02_multinode-956658-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-956658 ssh -n multinode-956658-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-956658 ssh -n multinode-956658-m03 "sudo cat /home/docker/cp-test_multinode-956658-m02_multinode-956658-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-956658 cp testdata/cp-test.txt multinode-956658-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-956658 ssh -n multinode-956658-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-956658 cp multinode-956658-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3753182169/001/cp-test_multinode-956658-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-956658 ssh -n multinode-956658-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-956658 cp multinode-956658-m03:/home/docker/cp-test.txt multinode-956658:/home/docker/cp-test_multinode-956658-m03_multinode-956658.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-956658 ssh -n multinode-956658-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-956658 ssh -n multinode-956658 "sudo cat /home/docker/cp-test_multinode-956658-m03_multinode-956658.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-956658 cp multinode-956658-m03:/home/docker/cp-test.txt multinode-956658-m02:/home/docker/cp-test_multinode-956658-m03_multinode-956658-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-956658 ssh -n multinode-956658-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-956658 ssh -n multinode-956658-m02 "sudo cat /home/docker/cp-test_multinode-956658-m03_multinode-956658-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.27s)

TestMultiNode/serial/StopNode (2.38s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-956658 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-956658 node stop m03: (1.328660509s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-956658 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-956658 status: exit status 7 (525.829664ms)
-- stdout --
	multinode-956658
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-956658-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-956658-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-956658 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-956658 status --alsologtostderr: exit status 7 (520.727003ms)
-- stdout --
	multinode-956658
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-956658-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-956658-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1217 21:16:01.405560  629509 out.go:360] Setting OutFile to fd 1 ...
	I1217 21:16:01.405794  629509 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 21:16:01.405829  629509 out.go:374] Setting ErrFile to fd 2...
	I1217 21:16:01.405849  629509 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 21:16:01.406144  629509 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-485134/.minikube/bin
	I1217 21:16:01.406371  629509 out.go:368] Setting JSON to false
	I1217 21:16:01.406436  629509 mustload.go:66] Loading cluster: multinode-956658
	I1217 21:16:01.406522  629509 notify.go:221] Checking for updates...
	I1217 21:16:01.406894  629509 config.go:182] Loaded profile config "multinode-956658": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 21:16:01.406939  629509 status.go:174] checking status of multinode-956658 ...
	I1217 21:16:01.407900  629509 cli_runner.go:164] Run: docker container inspect multinode-956658 --format={{.State.Status}}
	I1217 21:16:01.430221  629509 status.go:371] multinode-956658 host status = "Running" (err=<nil>)
	I1217 21:16:01.430244  629509 host.go:66] Checking if "multinode-956658" exists ...
	I1217 21:16:01.430567  629509 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-956658
	I1217 21:16:01.450711  629509 host.go:66] Checking if "multinode-956658" exists ...
	I1217 21:16:01.451034  629509 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 21:16:01.451171  629509 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-956658
	I1217 21:16:01.473171  629509 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33303 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/multinode-956658/id_rsa Username:docker}
	I1217 21:16:01.565238  629509 ssh_runner.go:195] Run: systemctl --version
	I1217 21:16:01.571896  629509 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 21:16:01.584803  629509 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 21:16:01.646744  629509 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-17 21:16:01.636517389 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 21:16:01.647312  629509 kubeconfig.go:125] found "multinode-956658" server: "https://192.168.67.2:8443"
	I1217 21:16:01.647354  629509 api_server.go:166] Checking apiserver status ...
	I1217 21:16:01.647409  629509 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 21:16:01.659524  629509 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1272/cgroup
	I1217 21:16:01.668526  629509 api_server.go:182] apiserver freezer: "2:freezer:/docker/64adce74e3ae9e145825b5d6973a2ed1b70abd2f6469c5062c5dc4e5c49f3a71/crio/crio-aa75e8d255a02cbb2d55f180e72208d98f07dafa0f0955a9a09a0d0d1476378a"
	I1217 21:16:01.668613  629509 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/64adce74e3ae9e145825b5d6973a2ed1b70abd2f6469c5062c5dc4e5c49f3a71/crio/crio-aa75e8d255a02cbb2d55f180e72208d98f07dafa0f0955a9a09a0d0d1476378a/freezer.state
	I1217 21:16:01.677067  629509 api_server.go:204] freezer state: "THAWED"
	I1217 21:16:01.677094  629509 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1217 21:16:01.685313  629509 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1217 21:16:01.685344  629509 status.go:463] multinode-956658 apiserver status = Running (err=<nil>)
	I1217 21:16:01.685356  629509 status.go:176] multinode-956658 status: &{Name:multinode-956658 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 21:16:01.685372  629509 status.go:174] checking status of multinode-956658-m02 ...
	I1217 21:16:01.685682  629509 cli_runner.go:164] Run: docker container inspect multinode-956658-m02 --format={{.State.Status}}
	I1217 21:16:01.702396  629509 status.go:371] multinode-956658-m02 host status = "Running" (err=<nil>)
	I1217 21:16:01.702433  629509 host.go:66] Checking if "multinode-956658-m02" exists ...
	I1217 21:16:01.702737  629509 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-956658-m02
	I1217 21:16:01.720345  629509 host.go:66] Checking if "multinode-956658-m02" exists ...
	I1217 21:16:01.720670  629509 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 21:16:01.720721  629509 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-956658-m02
	I1217 21:16:01.738339  629509 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33308 SSHKeyPath:/home/jenkins/minikube-integration/21808-485134/.minikube/machines/multinode-956658-m02/id_rsa Username:docker}
	I1217 21:16:01.835314  629509 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 21:16:01.849443  629509 status.go:176] multinode-956658-m02 status: &{Name:multinode-956658-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1217 21:16:01.849479  629509 status.go:174] checking status of multinode-956658-m03 ...
	I1217 21:16:01.849798  629509 cli_runner.go:164] Run: docker container inspect multinode-956658-m03 --format={{.State.Status}}
	I1217 21:16:01.870658  629509 status.go:371] multinode-956658-m03 host status = "Stopped" (err=<nil>)
	I1217 21:16:01.870685  629509 status.go:384] host is not running, skipping remaining checks
	I1217 21:16:01.870693  629509 status.go:176] multinode-956658-m03 status: &{Name:multinode-956658-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.38s)
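As the verbose status output above shows, minikube's apiserver health check is three plain steps: resolve the kube-apiserver PID, confirm the process's freezer cgroup reads THAWED, then probe /healthz on the apiserver port. A minimal by-hand sketch of the same probe (profile name, endpoint, and cgroup v1 freezer layout are taken from this run's log and will differ elsewhere; /healthz is assumed to answer anonymously, as it did here):

    # resolve the apiserver PID inside the node, then locate its freezer cgroup
    PID=$(minikube -p multinode-956658 ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*')
    minikube -p multinode-956658 ssh -- sudo grep ':freezer:' /proc/$PID/cgroup
    # cat the freezer.state file under /sys/fs/cgroup/freezer/<that path>; expect THAWED
    curl -ks https://192.168.67.2:8443/healthz    # expect: ok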

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (8.46s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-956658 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-956658 node start m03 -v=5 --alsologtostderr: (7.662331407s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-956658 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.46s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (73.2s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-956658
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-956658
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-956658: (25.121864799s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-956658 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-956658 --wait=true -v=5 --alsologtostderr: (47.950574245s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-956658
--- PASS: TestMultiNode/serial/RestartKeepsNodes (73.20s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (5.63s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-956658 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-956658 node delete m03: (4.955974661s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-956658 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.63s)
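The go-template above prints one Ready-condition status per remaining node. An equivalent jsonpath form that also names each node (a sketch, not what the test itself runs):

    kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'
    # expected after the delete: two lines, both ending in True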

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (24.03s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-956658 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-956658 stop: (23.828165211s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-956658 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-956658 status: exit status 7 (111.39779ms)

                                                
                                                
-- stdout --
	multinode-956658
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-956658-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-956658 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-956658 status --alsologtostderr: exit status 7 (91.53134ms)

                                                
                                                
-- stdout --
	multinode-956658
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-956658-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 21:17:53.156939  637466 out.go:360] Setting OutFile to fd 1 ...
	I1217 21:17:53.157051  637466 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 21:17:53.157061  637466 out.go:374] Setting ErrFile to fd 2...
	I1217 21:17:53.157067  637466 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 21:17:53.157312  637466 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-485134/.minikube/bin
	I1217 21:17:53.157503  637466 out.go:368] Setting JSON to false
	I1217 21:17:53.157530  637466 mustload.go:66] Loading cluster: multinode-956658
	I1217 21:17:53.157648  637466 notify.go:221] Checking for updates...
	I1217 21:17:53.157934  637466 config.go:182] Loaded profile config "multinode-956658": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 21:17:53.157957  637466 status.go:174] checking status of multinode-956658 ...
	I1217 21:17:53.158463  637466 cli_runner.go:164] Run: docker container inspect multinode-956658 --format={{.State.Status}}
	I1217 21:17:53.178670  637466 status.go:371] multinode-956658 host status = "Stopped" (err=<nil>)
	I1217 21:17:53.178695  637466 status.go:384] host is not running, skipping remaining checks
	I1217 21:17:53.178702  637466 status.go:176] multinode-956658 status: &{Name:multinode-956658 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 21:17:53.178735  637466 status.go:174] checking status of multinode-956658-m02 ...
	I1217 21:17:53.179050  637466 cli_runner.go:164] Run: docker container inspect multinode-956658-m02 --format={{.State.Status}}
	I1217 21:17:53.197729  637466 status.go:371] multinode-956658-m02 host status = "Stopped" (err=<nil>)
	I1217 21:17:53.197750  637466 status.go:384] host is not running, skipping remaining checks
	I1217 21:17:53.197775  637466 status.go:176] multinode-956658-m02 status: &{Name:multinode-956658-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.03s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (48.5s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-956658 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1217 21:18:21.928667  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-956658 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (47.80975682s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-956658 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (48.50s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (36.83s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-956658
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-956658-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-956658-m02 --driver=docker  --container-runtime=crio: exit status 14 (92.895275ms)

                                                
                                                
-- stdout --
	* [multinode-956658-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21808
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21808-485134/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-485134/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-956658-m02' is duplicated with machine name 'multinode-956658-m02' in profile 'multinode-956658'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-956658-m03 --driver=docker  --container-runtime=crio
E1217 21:18:56.661805  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-956658-m03 --driver=docker  --container-runtime=crio: (34.252447832s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-956658
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-956658: exit status 80 (354.224866ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-956658 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-956658-m03 already exists in multinode-956658-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-956658-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-956658-m03: (2.077198761s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (36.83s)
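Both non-zero exits above are name collisions: a new profile may not reuse a machine name belonging to an existing profile, and node add refuses a node name that already exists as a standalone profile. A quick way to see which names are taken before creating a profile (a sketch; the .valid/.invalid field layout is assumed from minikube's current profile list JSON, and jq is assumed to be installed):

    minikube profile list --output=json | jq -r '.valid[].Name, .invalid[].Name'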

                                                
                                    
x
+
TestPreload (125.34s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-520433 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio
preload_test.go:41: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-520433 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio: (1m0.284130134s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-520433 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-linux-arm64 -p test-preload-520433 image pull gcr.io/k8s-minikube/busybox: (2.208060904s)
preload_test.go:55: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-520433
E1217 21:20:30.851719  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-643319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:55: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-520433: (5.937360091s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-520433 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-520433 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (54.118380706s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-520433 image list
helpers_test.go:176: Cleaning up "test-preload-520433" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-520433
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-520433: (2.551380509s)
--- PASS: TestPreload (125.34s)
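Condensed, the preload check above is: create a cluster without the preloaded image tarball, pull an extra image, stop, restart with preload enabled, and confirm the pulled image survived the restart. A sketch with a hypothetical profile name:

    minikube start -p demo --preload=false --driver=docker --container-runtime=crio
    minikube -p demo image pull gcr.io/k8s-minikube/busybox
    minikube stop -p demo
    minikube start -p demo --preload=true --driver=docker --container-runtime=crio
    minikube -p demo image list    # busybox should still be listed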

                                                
                                    
x
+
TestScheduledStopUnix (108.05s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-533493 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-533493 --memory=3072 --driver=docker  --container-runtime=crio: (30.772257335s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-533493 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1217 21:21:59.026261  652197 out.go:360] Setting OutFile to fd 1 ...
	I1217 21:21:59.026450  652197 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 21:21:59.026482  652197 out.go:374] Setting ErrFile to fd 2...
	I1217 21:21:59.026502  652197 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 21:21:59.026825  652197 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-485134/.minikube/bin
	I1217 21:21:59.027128  652197 out.go:368] Setting JSON to false
	I1217 21:21:59.027285  652197 mustload.go:66] Loading cluster: scheduled-stop-533493
	I1217 21:21:59.027827  652197 config.go:182] Loaded profile config "scheduled-stop-533493": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 21:21:59.027989  652197 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/scheduled-stop-533493/config.json ...
	I1217 21:21:59.028276  652197 mustload.go:66] Loading cluster: scheduled-stop-533493
	I1217 21:21:59.028488  652197 config.go:182] Loaded profile config "scheduled-stop-533493": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-533493 -n scheduled-stop-533493
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-533493 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1217 21:21:59.474910  652287 out.go:360] Setting OutFile to fd 1 ...
	I1217 21:21:59.475091  652287 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 21:21:59.475124  652287 out.go:374] Setting ErrFile to fd 2...
	I1217 21:21:59.475145  652287 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 21:21:59.475477  652287 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-485134/.minikube/bin
	I1217 21:21:59.475887  652287 out.go:368] Setting JSON to false
	I1217 21:21:59.476138  652287 daemonize_unix.go:73] killing process 652219 as it is an old scheduled stop
	I1217 21:21:59.479730  652287 mustload.go:66] Loading cluster: scheduled-stop-533493
	I1217 21:21:59.480200  652287 config.go:182] Loaded profile config "scheduled-stop-533493": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 21:21:59.480290  652287 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/scheduled-stop-533493/config.json ...
	I1217 21:21:59.480508  652287 mustload.go:66] Loading cluster: scheduled-stop-533493
	I1217 21:21:59.480628  652287 config.go:182] Loaded profile config "scheduled-stop-533493": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1217 21:21:59.485612  488412 retry.go:31] will retry after 88.753µs: open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/scheduled-stop-533493/pid: no such file or directory
I1217 21:21:59.486772  488412 retry.go:31] will retry after 160.933µs: open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/scheduled-stop-533493/pid: no such file or directory
I1217 21:21:59.487902  488412 retry.go:31] will retry after 308.947µs: open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/scheduled-stop-533493/pid: no such file or directory
I1217 21:21:59.489025  488412 retry.go:31] will retry after 280.612µs: open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/scheduled-stop-533493/pid: no such file or directory
I1217 21:21:59.490128  488412 retry.go:31] will retry after 365.744µs: open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/scheduled-stop-533493/pid: no such file or directory
I1217 21:21:59.491668  488412 retry.go:31] will retry after 1.123253ms: open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/scheduled-stop-533493/pid: no such file or directory
I1217 21:21:59.493874  488412 retry.go:31] will retry after 1.4604ms: open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/scheduled-stop-533493/pid: no such file or directory
I1217 21:21:59.496081  488412 retry.go:31] will retry after 2.199456ms: open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/scheduled-stop-533493/pid: no such file or directory
I1217 21:21:59.499280  488412 retry.go:31] will retry after 2.839831ms: open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/scheduled-stop-533493/pid: no such file or directory
I1217 21:21:59.502490  488412 retry.go:31] will retry after 5.410475ms: open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/scheduled-stop-533493/pid: no such file or directory
I1217 21:21:59.508741  488412 retry.go:31] will retry after 8.037871ms: open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/scheduled-stop-533493/pid: no such file or directory
I1217 21:21:59.519158  488412 retry.go:31] will retry after 12.302084ms: open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/scheduled-stop-533493/pid: no such file or directory
I1217 21:21:59.531711  488412 retry.go:31] will retry after 11.23399ms: open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/scheduled-stop-533493/pid: no such file or directory
I1217 21:21:59.543914  488412 retry.go:31] will retry after 19.613117ms: open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/scheduled-stop-533493/pid: no such file or directory
I1217 21:21:59.564143  488412 retry.go:31] will retry after 35.117393ms: open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/scheduled-stop-533493/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-533493 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-533493 -n scheduled-stop-533493
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-533493
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-533493 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1217 21:22:25.412963  652763 out.go:360] Setting OutFile to fd 1 ...
	I1217 21:22:25.413072  652763 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 21:22:25.413082  652763 out.go:374] Setting ErrFile to fd 2...
	I1217 21:22:25.413087  652763 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 21:22:25.413372  652763 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-485134/.minikube/bin
	I1217 21:22:25.413643  652763 out.go:368] Setting JSON to false
	I1217 21:22:25.413754  652763 mustload.go:66] Loading cluster: scheduled-stop-533493
	I1217 21:22:25.414112  652763 config.go:182] Loaded profile config "scheduled-stop-533493": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 21:22:25.414188  652763 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/scheduled-stop-533493/config.json ...
	I1217 21:22:25.414381  652763 mustload.go:66] Loading cluster: scheduled-stop-533493
	I1217 21:22:25.414504  652763 config.go:182] Loaded profile config "scheduled-stop-533493": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-533493
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-533493: exit status 7 (73.534831ms)

                                                
                                                
-- stdout --
	scheduled-stop-533493
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-533493 -n scheduled-stop-533493
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-533493 -n scheduled-stop-533493: exit status 7 (69.868341ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-533493" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-533493
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-533493: (5.680742489s)
--- PASS: TestScheduledStopUnix (108.05s)
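The sequence above walks the whole scheduled-stop lifecycle: schedule, inspect, reschedule (which kills the previous scheduler process, per the daemonize_unix.go line), and cancel. In sketch form, with a hypothetical profile name and the flags the test actually uses:

    minikube stop -p demo --schedule 5m        # daemonize a stop five minutes out
    minikube status -p demo --format='{{.TimeToStop}}'
    minikube stop -p demo --schedule 15s       # replaces the pending schedule
    minikube stop -p demo --cancel-scheduled   # prints: All existing scheduled stops cancelled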

                                                
                                    
x
+
TestInsufficientStorage (10.01s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-367104 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
E1217 21:23:21.927891  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-367104 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (7.454442214s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"05a9090b-69a1-4ba4-951f-2c130e326de6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-367104] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"92e3c6e9-92cf-4f49-bef2-943ca8dce1c7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21808"}}
	{"specversion":"1.0","id":"e69ab91b-696f-475a-ace2-d5031f8a1057","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0d356517-76c2-4dbd-87ee-ac585e469d02","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21808-485134/kubeconfig"}}
	{"specversion":"1.0","id":"60c1b0ef-48d6-4074-adef-eaec09838536","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-485134/.minikube"}}
	{"specversion":"1.0","id":"e6a6d32d-b184-4d60-a3f1-2d564a385615","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"e1bccbf4-a582-42d2-9a2d-6b3dfd9de65e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"b60a0f84-ceb5-4990-a9bd-05623fbee31b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"4573ab40-c49f-4d84-b357-4b2aa406b0a0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"bc0da320-7bbd-4aef-85c6-7b45b7bf8fe8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"6a91d4f3-1d83-4c2c-a7f6-0ff0df49648c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"090ed4dd-7878-466a-b000-3d809310dec7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-367104\" primary control-plane node in \"insufficient-storage-367104\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"1a6e59f1-dfc4-4e00-938a-f582bdded58c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1765661130-22141 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"75e68381-8441-43a7-b923-829f909d0a6c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"30dba26d-7eb3-42a3-b1d4-5cb801ea03a5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-367104 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-367104 --output=json --layout=cluster: exit status 7 (289.128593ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-367104","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-367104","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1217 21:23:23.979933  654645 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-367104" does not appear in /home/jenkins/minikube-integration/21808-485134/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-367104 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-367104 --output=json --layout=cluster: exit status 7 (304.313447ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-367104","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-367104","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1217 21:23:24.283040  654713 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-367104" does not appear in /home/jenkins/minikube-integration/21808-485134/kubeconfig
	E1217 21:23:24.293631  654713 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/insufficient-storage-367104/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-367104" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-367104
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-367104: (1.958439284s)
--- PASS: TestInsufficientStorage (10.01s)
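Because --output=json emits one CloudEvent per line, the failure above can be picked out mechanically. A sketch assuming jq is installed (the event type string is copied verbatim from the output above):

    minikube start -p demo --output=json --driver=docker \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'
    # here that yields: RSRC_DOCKER_STORAGE: Docker is out of disk space! (/var is at 100% of capacity). ...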

                                                
                                    
x
+
TestRunningBinaryUpgrade (300.33s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.2994946745 start -p running-upgrade-206976 --memory=3072 --vm-driver=docker  --container-runtime=crio
E1217 21:31:53.927828  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-643319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.2994946745 start -p running-upgrade-206976 --memory=3072 --vm-driver=docker  --container-runtime=crio: (32.04508326s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-206976 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1217 21:33:21.930188  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 21:33:56.661131  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 21:35:30.852139  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-643319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-206976 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m24.794136572s)
helpers_test.go:176: Cleaning up "running-upgrade-206976" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-206976
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-206976: (1.976785397s)
--- PASS: TestRunningBinaryUpgrade (300.33s)

                                                
                                    
x
+
TestMissingContainerUpgrade (118.87s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.1646420340 start -p missing-upgrade-783783 --memory=3072 --driver=docker  --container-runtime=crio
E1217 21:23:56.661247  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.1646420340 start -p missing-upgrade-783783 --memory=3072 --driver=docker  --container-runtime=crio: (1m6.950595113s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-783783
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-783783
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-783783 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1217 21:25:19.733984  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-783783 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (47.354930713s)
helpers_test.go:176: Cleaning up "missing-upgrade-783783" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-783783
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-783783: (2.005185418s)
--- PASS: TestMissingContainerUpgrade (118.87s)
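Condensed, the recovery path this test exercises: the profile's container is removed out from under an older minikube, and the newer binary must recreate it on the next start. The commands below are the ones in the log (the /tmp path is this run's temporary copy of the v1.35.0 release binary):

    /tmp/minikube-v1.35.0.1646420340 start -p missing-upgrade-783783 --memory=3072 --driver=docker --container-runtime=crio
    docker stop missing-upgrade-783783
    docker rm missing-upgrade-783783
    out/minikube-linux-arm64 start -p missing-upgrade-783783 --memory=3072 --driver=docker --container-runtime=crio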

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-185508 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-185508 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (103.963782ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-185508] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21808
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21808-485134/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-485134/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
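The rejected flag combination above, plus the remediation the error message suggests, as a sketch (profile name hypothetical):

    minikube start -p demo --no-kubernetes --kubernetes-version=v1.28.0   # exit 14 (MK_USAGE)
    minikube config unset kubernetes-version   # clears a globally configured version
    minikube start -p demo --no-kubernetes     # then starts the node without Kubernetes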

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (48.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-185508 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-185508 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (47.766762454s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-185508 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (48.21s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (111.57s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-185508 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-185508 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (1m48.856984962s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-185508 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-185508 status -o json: exit status 2 (309.637197ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-185508","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-185508
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-185508: (2.406525938s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (111.57s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (10.18s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-185508 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-185508 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (10.180677491s)
--- PASS: TestNoKubernetes/serial/Start (10.18s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/21808-485134/.minikube/cache/linux/arm64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-185508 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-185508 "sudo systemctl is-active --quiet service kubelet": exit status 1 (339.187358ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.34s)
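The exit status 3 in stderr is systemctl's, not ssh's: systemctl is-active exits 0 for an active unit and non-zero (typically 3) for an inactive or missing one, which is exactly what a --no-kubernetes node should report for kubelet. Checking by hand (a sketch):

    minikube ssh -p NoKubernetes-185508
    # then, inside the node:
    sudo systemctl is-active kubelet; echo $?   # prints inactive, then 3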

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.01s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.01s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-185508
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-185508: (1.298634469s)
--- PASS: TestNoKubernetes/serial/Stop (1.30s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (7.16s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-185508 --driver=docker  --container-runtime=crio
E1217 21:26:24.997323  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-185508 --driver=docker  --container-runtime=crio: (7.162040913s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.16s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-185508 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-185508 "sudo systemctl is-active --quiet service kubelet": exit status 1 (268.803015ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (1.66s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.66s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (299.18s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.3354870768 start -p stopped-upgrade-993252 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.3354870768 start -p stopped-upgrade-993252 --memory=3072 --vm-driver=docker  --container-runtime=crio: (31.768764801s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.3354870768 -p stopped-upgrade-993252 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.3354870768 -p stopped-upgrade-993252 stop: (1.24478857s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-993252 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1217 21:28:21.928421  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-655452/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 21:28:56.661452  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/addons-052340/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 21:30:30.851830  488412 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-485134/.minikube/profiles/functional-643319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-993252 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m26.163993393s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (299.18s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.8s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-993252
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-993252: (1.797611128s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.80s)

                                                
                                    
x
+
TestPause/serial/Start (52.8s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-918446 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-918446 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (52.797082295s)
--- PASS: TestPause/serial/Start (52.80s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (26.92s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-918446 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-918446 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (26.896311789s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (26.92s)

                                                
                                    

Test skip (36/316)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.3/cached-images 0
15 TestDownloadOnly/v1.34.3/binaries 0
16 TestDownloadOnly/v1.34.3/kubectl 0
23 TestDownloadOnly/v1.35.0-rc.1/cached-images 0
24 TestDownloadOnly/v1.35.0-rc.1/binaries 0
25 TestDownloadOnly/v1.35.0-rc.1/kubectl 0
29 TestDownloadOnlyKic 0.54
31 TestOffline 0
42 TestAddons/serial/GCPAuth/RealCredentials 0
49 TestAddons/parallel/Olm 0
56 TestAddons/parallel/AmdGpuDevicePlugin 0
60 TestDockerFlags 0
63 TestDockerEnvContainerd 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
112 TestFunctional/parallel/MySQL 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
148 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0
149 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
150 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0
207 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL 0
211 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv 0
212 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv 0
224 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig 0
225 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
226 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS 0
261 TestGvisorAddon 0
283 TestImageBuild 0
284 TestISOImage 0
348 TestChangeNoneUser 0
351 TestScheduledStopWindows 0
353 TestSkaffold 0
x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.3/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.3/cached-images (0.00s)

TestDownloadOnly/v1.34.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.3/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.3/binaries (0.00s)

TestDownloadOnly/v1.34.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.3/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.3/kubectl (0.00s)

TestDownloadOnly/v1.35.0-rc.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.35.0-rc.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0-rc.1/cached-images (0.00s)

TestDownloadOnly/v1.35.0-rc.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.35.0-rc.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0-rc.1/binaries (0.00s)

TestDownloadOnly/v1.35.0-rc.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.35.0-rc.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-rc.1/kubectl (0.00s)

TestDownloadOnlyKic (0.54s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-133846 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:176: Cleaning up "download-docker-133846" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-133846
--- SKIP: TestDownloadOnlyKic (0.54s)
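
Unlike the zero-duration skips, TestDownloadOnlyKic does real work (0.54s) before bailing out: it launches a download-only start, skips on arm64, and the cleanup helper still deletes the profile. A hedged sketch of that shape, with the cleanup inlined via os/exec rather than minikube's actual helpers:

    import (
    	"os/exec"
    	"runtime"
    	"testing"
    )

    func TestDownloadOnlyKicSketch(t *testing.T) {
    	profile := "download-docker-133846"
    	// Deferred first, so the profile is deleted even when the test
    	// skips below (matching the helpers_test.go cleanup above).
    	defer func() { _ = exec.Command("out/minikube-linux-arm64", "delete", "-p", profile).Run() }()

    	_ = exec.Command("out/minikube-linux-arm64", "start", "--download-only", "-p", profile,
    		"--alsologtostderr", "--driver=docker", "--container-runtime=crio").Run()

    	if runtime.GOARCH == "arm64" {
    		t.Skip("Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144")
    	}
    }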

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)
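
The === PAUSE / === CONT pairs in sections like this one come from Go's testing framework itself: a test that calls t.Parallel() is paused as soon as it registers and continued once the serial tests finish, and the skip guard only runs at that point. A minimal sketch, reusing the containerRuntime stand-in from the first sketch:

    func TestOfflineSketch(t *testing.T) {
    	t.Parallel() // go test -v prints "=== PAUSE" here and "=== CONT" on resume

    	// Guard mirroring aab_offline_test.go:35 above.
    	if runtime.GOARCH == "arm64" && containerRuntime != "docker" {
    		t.Skip("skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144")
    	}
    }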

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:761: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1035: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestISOImage (0s)

=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)